Test Report: Docker_Linux_crio_arm64 21767

                    
05a109d80d7e573d35c6ebc91a1126cc576c7968:2025-10-18:41956

Tests failed (36/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.3
35 TestAddons/parallel/Registry 16.26
36 TestAddons/parallel/RegistryCreds 0.52
37 TestAddons/parallel/Ingress 146.64
38 TestAddons/parallel/InspektorGadget 5.31
39 TestAddons/parallel/MetricsServer 5.38
41 TestAddons/parallel/CSI 58.74
42 TestAddons/parallel/Headlamp 3.25
43 TestAddons/parallel/CloudSpanner 5.27
44 TestAddons/parallel/LocalPath 8.39
45 TestAddons/parallel/NvidiaDevicePlugin 6.27
46 TestAddons/parallel/Yakd 6.28
98 TestFunctional/parallel/ServiceCmdConnect 603.57
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.92
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
136 TestFunctional/parallel/ServiceCmd/Format 0.53
137 TestFunctional/parallel/ServiceCmd/URL 0.48
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.16
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.47
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
191 TestJSONOutput/pause/Command 1.89
197 TestJSONOutput/unpause/Command 1.85
271 TestPause/serial/Pause 8.51
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.42
303 TestStartStop/group/old-k8s-version/serial/Pause 7.99
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.48
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.1
321 TestStartStop/group/no-preload/serial/Pause 6.3
327 TestStartStop/group/embed-certs/serial/Pause 8.48
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.41
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.5
341 TestStartStop/group/newest-cni/serial/Pause 6.04
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.22
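To reproduce one of these failures outside CI, a single test can be re-run from a minikube source checkout. The sketch below is an assumption about the exact invocation (flag spellings follow the integration suite's conventions), with -minikube-start-args mirroring this job's docker/crio configuration:

    # Sketch (assumed invocation): re-run one failing integration test locally.
    # Requires a built minikube binary and a working docker daemon.
    go test ./test/integration -v -timeout 60m \
      -run "TestAddons/parallel/Registry" \
      -args -minikube-start-args="--driver=docker --container-runtime=crio"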
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable volcano --alsologtostderr -v=1: exit status 11 (297.113903ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:32:42.634148 1282880 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:42.635789 1282880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:42.635804 1282880 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:42.635809 1282880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:42.636142 1282880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:32:42.636481 1282880 mustload.go:65] Loading cluster: addons-718596
	I1018 08:32:42.636931 1282880 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:42.636953 1282880 addons.go:606] checking whether the cluster is paused
	I1018 08:32:42.637106 1282880 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:42.637181 1282880 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:32:42.637834 1282880 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:32:42.655260 1282880 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:42.655316 1282880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:32:42.673322 1282880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:32:42.778309 1282880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:42.778402 1282880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:42.812870 1282880 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:32:42.812896 1282880 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:32:42.812901 1282880 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:32:42.812904 1282880 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:32:42.812908 1282880 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:32:42.812911 1282880 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:32:42.812914 1282880 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:32:42.812917 1282880 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:32:42.812920 1282880 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:32:42.812927 1282880 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:32:42.812930 1282880 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:32:42.812934 1282880 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:32:42.812937 1282880 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:32:42.812940 1282880 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:32:42.812944 1282880 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:32:42.812957 1282880 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:32:42.812961 1282880 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:32:42.812966 1282880 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:32:42.812969 1282880 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:32:42.812972 1282880 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:32:42.812977 1282880 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:32:42.812985 1282880 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:32:42.812988 1282880 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:32:42.812992 1282880 cri.go:89] found id: ""
	I1018 08:32:42.813045 1282880 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:42.828372 1282880 out.go:203] 
	W1018 08:32:42.831278 1282880 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:42.831311 1282880 out.go:285] * 
	* 
	W1018 08:32:42.840649 1282880 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:42.843673 1282880 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.30s)
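Every addons-disable failure in this report shares the root cause visible in the stderr above: before disabling an addon, minikube verifies the cluster is not paused by running `sudo runc list -f json` on the node, and on this CRI-O node /run/runc does not exist, so the probe exits with status 1 and the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing the probe by hand (profile name taken from the log; the crio config grep is an assumption about how to read the configured runtime):

    # Re-run the paused-state probe minikube performs (expected on this node:
    # exit 1 with "open /run/runc: no such file or directory").
    out/minikube-linux-arm64 -p addons-718596 ssh -- sudo runc list -f json

    # Check which low-level OCI runtime CRI-O is configured to use; if it is
    # not runc, the /run/runc state directory is never created.
    out/minikube-linux-arm64 -p addons-718596 ssh -- "sudo crio config | grep -i default_runtime"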

                                                
                                    
TestAddons/parallel/Registry (16.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.12602ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003070482s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003383743s
addons_test.go:392: (dbg) Run:  kubectl --context addons-718596 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-718596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-718596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.751408628s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 ip
2025/10/18 08:33:08 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable registry --alsologtostderr -v=1: exit status 11 (258.934717ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:33:08.206178 1283810 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:33:08.207569 1283810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:08.207585 1283810 out.go:374] Setting ErrFile to fd 2...
	I1018 08:33:08.207591 1283810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:08.207950 1283810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:33:08.208301 1283810 mustload.go:65] Loading cluster: addons-718596
	I1018 08:33:08.208711 1283810 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:08.208731 1283810 addons.go:606] checking whether the cluster is paused
	I1018 08:33:08.208872 1283810 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:08.208898 1283810 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:33:08.209390 1283810 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:33:08.229762 1283810 ssh_runner.go:195] Run: systemctl --version
	I1018 08:33:08.229836 1283810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:33:08.246613 1283810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:33:08.350444 1283810 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:33:08.350533 1283810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:33:08.382150 1283810 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:33:08.382172 1283810 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:33:08.382182 1283810 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:33:08.382186 1283810 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:33:08.382189 1283810 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:33:08.382193 1283810 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:33:08.382197 1283810 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:33:08.382200 1283810 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:33:08.382203 1283810 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:33:08.382210 1283810 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:33:08.382218 1283810 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:33:08.382221 1283810 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:33:08.382224 1283810 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:33:08.382228 1283810 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:33:08.382234 1283810 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:33:08.382240 1283810 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:33:08.382243 1283810 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:33:08.382247 1283810 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:33:08.382250 1283810 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:33:08.382253 1283810 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:33:08.382258 1283810 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:33:08.382265 1283810 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:33:08.382268 1283810 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:33:08.382271 1283810 cri.go:89] found id: ""
	I1018 08:33:08.382320 1283810 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:33:08.397099 1283810 out.go:203] 
	W1018 08:33:08.399965 1283810 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:33:08.399989 1283810 out.go:285] * 
	* 
	W1018 08:33:08.409011 1283810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:33:08.411900 1283810 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.26s)
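The registry itself passed every functional check above (both pods became healthy, and the in-cluster wget probe plus the GET against 192.168.49.2:5000 succeeded); only the trailing addons-disable step failed, on the same runc probe as the Volcano test. The addon can be verified independently of the harness with the same in-cluster probe plus the registry catalog endpoint (the catalog call assumes the standard Docker registry v2 HTTP API is exposed on the published port):

    # In-cluster probe, identical to the one the test runs.
    kubectl --context addons-718596 run --rm registry-check --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

    # From the host, via the node IP recorded in the log above.
    curl -s http://192.168.49.2:5000/v2/_catalog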

                                                
                                    
TestAddons/parallel/RegistryCreds (0.52s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.967329ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-718596
addons_test.go:332: (dbg) Run:  kubectl --context addons-718596 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (274.481754ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:33:54.460905 1285025 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:33:54.462416 1285025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:54.462429 1285025 out.go:374] Setting ErrFile to fd 2...
	I1018 08:33:54.462435 1285025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:54.462817 1285025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:33:54.463197 1285025 mustload.go:65] Loading cluster: addons-718596
	I1018 08:33:54.463822 1285025 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:54.463834 1285025 addons.go:606] checking whether the cluster is paused
	I1018 08:33:54.464012 1285025 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:54.464029 1285025 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:33:54.464678 1285025 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:33:54.484045 1285025 ssh_runner.go:195] Run: systemctl --version
	I1018 08:33:54.484109 1285025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:33:54.505254 1285025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:33:54.606404 1285025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:33:54.606487 1285025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:33:54.637580 1285025 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:33:54.637604 1285025 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:33:54.637613 1285025 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:33:54.637617 1285025 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:33:54.637621 1285025 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:33:54.637624 1285025 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:33:54.637627 1285025 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:33:54.637631 1285025 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:33:54.637634 1285025 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:33:54.637640 1285025 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:33:54.637644 1285025 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:33:54.637648 1285025 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:33:54.637652 1285025 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:33:54.637656 1285025 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:33:54.637662 1285025 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:33:54.637669 1285025 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:33:54.637677 1285025 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:33:54.637682 1285025 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:33:54.637686 1285025 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:33:54.637688 1285025 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:33:54.637694 1285025 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:33:54.637697 1285025 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:33:54.637701 1285025 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:33:54.637709 1285025 cri.go:89] found id: ""
	I1018 08:33:54.637758 1285025 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:33:54.652592 1285025 out.go:203] 
	W1018 08:33:54.655578 1285025 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:33:54.655613 1285025 out.go:285] * 
	* 
	W1018 08:33:54.664517 1285025 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:33:54.667472 1285025 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.52s)

                                                
                                    
TestAddons/parallel/Ingress (146.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-718596 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-718596 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-718596 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [1ba2b273-724e-433e-b74c-66922d20b535] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [1ba2b273-724e-433e-b74c-66922d20b535] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004155583s
I1018 08:33:30.879673 1276097 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.456011415s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-718596 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
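The curl probe at addons_test.go:264 exited with ssh status 28 (curl's timeout error) after the full 2m10s window, so the request reached the node but nothing answered on port 80. A sketch of the obvious next diagnostic steps, assuming the ingress-nginx addon's default controller deployment name:

    # Repeat the probe with a short timeout and verbose output.
    out/minikube-linux-arm64 -p addons-718596 ssh -- \
      "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Inspect the controller and its endpoints (deployment name is assumed).
    kubectl --context addons-718596 -n ingress-nginx get pods,svc,endpoints
    kubectl --context addons-718596 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50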
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-718596
helpers_test.go:243: (dbg) docker inspect addons-718596:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292",
	        "Created": "2025-10-18T08:30:14.15517958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1277258,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T08:30:14.221505486Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/hostname",
	        "HostsPath": "/var/lib/docker/containers/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/hosts",
	        "LogPath": "/var/lib/docker/containers/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292-json.log",
	        "Name": "/addons-718596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-718596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-718596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292",
	                "LowerDir": "/var/lib/docker/overlay2/46018aad8cff278750f0c63dd3e2338fc02fc1faf3fc20e510086c0eb07c6cb6-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46018aad8cff278750f0c63dd3e2338fc02fc1faf3fc20e510086c0eb07c6cb6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46018aad8cff278750f0c63dd3e2338fc02fc1faf3fc20e510086c0eb07c6cb6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46018aad8cff278750f0c63dd3e2338fc02fc1faf3fc20e510086c0eb07c6cb6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-718596",
	                "Source": "/var/lib/docker/volumes/addons-718596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-718596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-718596",
	                "name.minikube.sigs.k8s.io": "addons-718596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d4668ccbf9641fa9255a434a34f258857ac42e41520e21c0fa31fd9f4cf7fa7",
	            "SandboxKey": "/var/run/docker/netns/1d4668ccbf96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34591"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34592"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34595"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34593"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34594"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-718596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:68:f3:4f:66:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e521cd8786e64916a2fa82c7e1b4ef4883e53245ebc0e9edab985ff6e857cb1",
	                    "EndpointID": "016be47b442e12f6291f2c4f8dc41a6222e1d42ca95fb4588f6e4964981c89b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-718596",
	                        "1da112bdf57c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
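The inspect output confirms the mapping the earlier cli_runner lines depended on: 22/tcp is published on 127.0.0.1:34591, matching the ssh client created in the stderr logs above. That lookup can be reproduced verbatim with the same Go template minikube uses:

    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-718596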
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-718596 -n addons-718596
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-718596 logs -n 25: (1.459571386s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-695796                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-695796 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-375009 --alsologtostderr --binary-mirror http://127.0.0.1:42419 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-375009   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-375009                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-375009   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-718596                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-718596                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-718596 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:32 UTC │
	│ addons  │ addons-718596 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-718596 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ enable headlamp -p addons-718596 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-718596 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ ip      │ addons-718596 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-718596 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-718596 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-718596 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │                     │
	│ ssh     │ addons-718596 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-718596 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-718596 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-718596                                                                                                                                                                                                                                                                                                                                                                                           │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │ 18 Oct 25 08:33 UTC │
	│ addons  │ addons-718596 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:33 UTC │                     │
	│ addons  │ addons-718596 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:34 UTC │                     │
	│ addons  │ addons-718596 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:34 UTC │                     │
	│ ssh     │ addons-718596 ssh cat /opt/local-path-provisioner/pvc-de02ca27-1646-44a3-877b-65be44ed9287_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:34 UTC │ 18 Oct 25 08:34 UTC │
	│ addons  │ addons-718596 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:34 UTC │                     │
	│ addons  │ addons-718596 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:34 UTC │                     │
	│ ip      │ addons-718596 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:35 UTC │ 18 Oct 25 08:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
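
	The entries that follow use the glog format declared above: a severity letter (I/W/E/F) fused to mmdd, a wall-clock timestamp, the process ID, and the emitting file:line. As a hedged aside (saving this log to a file named last-start.log is hypothetical, not something the harness does), warnings and errors can be isolated with:

	# keep only W/E/F records; the severity letter is fused to the date, e.g. "W1018"
	grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log
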
	I1018 08:29:47.349632 1276853 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:47.349834 1276853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:47.349865 1276853 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:47.349884 1276853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:47.350258 1276853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:29:47.350877 1276853 out.go:368] Setting JSON to false
	I1018 08:29:47.351861 1276853 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36735,"bootTime":1760739453,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 08:29:47.351930 1276853 start.go:141] virtualization:  
	I1018 08:29:47.355179 1276853 out.go:179] * [addons-718596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 08:29:47.358924 1276853 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:29:47.359079 1276853 notify.go:220] Checking for updates...
	I1018 08:29:47.364713 1276853 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:47.367577 1276853 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:29:47.370362 1276853 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 08:29:47.373252 1276853 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 08:29:47.376017 1276853 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:29:47.378979 1276853 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:47.409945 1276853 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 08:29:47.410106 1276853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:47.464340 1276853 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 08:29:47.455147592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
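
	The single dense line above is the whole `docker system info --format "{{json .}}"` payload serialized as one JSON object. A hedged way to reproduce it readably (assuming jq is installed on the host, which the harness does not require; the field names are the daemon's JSON keys):

	docker system info --format '{{json .}}' | jq '{NCPU, MemTotal, CgroupDriver, SecurityOptions}'
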
	I1018 08:29:47.464456 1276853 docker.go:318] overlay module found
	I1018 08:29:47.467520 1276853 out.go:179] * Using the docker driver based on user configuration
	I1018 08:29:47.470234 1276853 start.go:305] selected driver: docker
	I1018 08:29:47.470252 1276853 start.go:925] validating driver "docker" against <nil>
	I1018 08:29:47.470266 1276853 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:29:47.471007 1276853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:47.523216 1276853 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 08:29:47.5145796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:29:47.523374 1276853 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:47.523602 1276853 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:29:47.526516 1276853 out.go:179] * Using Docker driver with root privileges
	I1018 08:29:47.529363 1276853 cni.go:84] Creating CNI manager for ""
	I1018 08:29:47.529459 1276853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:29:47.529526 1276853 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:29:47.529619 1276853 start.go:349] cluster config:
	{Name:addons-718596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:47.532779 1276853 out.go:179] * Starting "addons-718596" primary control-plane node in "addons-718596" cluster
	I1018 08:29:47.535572 1276853 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:29:47.538415 1276853 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:29:47.541217 1276853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:47.541271 1276853 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 08:29:47.541298 1276853 cache.go:58] Caching tarball of preloaded images
	I1018 08:29:47.541312 1276853 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:29:47.541388 1276853 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 08:29:47.541398 1276853 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:29:47.541727 1276853 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/config.json ...
	I1018 08:29:47.541749 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/config.json: {Name:mka4d001fbaa07ca0818af11df2d956be6ef062b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:47.556920 1276853 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:29:47.557036 1276853 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:29:47.557055 1276853 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 08:29:47.557061 1276853 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 08:29:47.557068 1276853 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 08:29:47.557073 1276853 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 08:30:05.781930 1276853 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 08:30:05.781971 1276853 cache.go:232] Successfully downloaded all kic artifacts
	I1018 08:30:05.782002 1276853 start.go:360] acquireMachinesLock for addons-718596: {Name:mk7bf7588de7d6bcca70e234e6145d68c8ec74e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:30:05.782125 1276853 start.go:364] duration metric: took 98.992µs to acquireMachinesLock for "addons-718596"
	I1018 08:30:05.782157 1276853 start.go:93] Provisioning new machine with config: &{Name:addons-718596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:30:05.782225 1276853 start.go:125] createHost starting for "" (driver="docker")
	I1018 08:30:05.785695 1276853 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 08:30:05.785931 1276853 start.go:159] libmachine.API.Create for "addons-718596" (driver="docker")
	I1018 08:30:05.785984 1276853 client.go:168] LocalClient.Create starting
	I1018 08:30:05.786109 1276853 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem
	I1018 08:30:06.440988 1276853 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem
	I1018 08:30:07.431950 1276853 cli_runner.go:164] Run: docker network inspect addons-718596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 08:30:07.447582 1276853 cli_runner.go:211] docker network inspect addons-718596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 08:30:07.447661 1276853 network_create.go:284] running [docker network inspect addons-718596] to gather additional debugging logs...
	I1018 08:30:07.447680 1276853 cli_runner.go:164] Run: docker network inspect addons-718596
	W1018 08:30:07.462764 1276853 cli_runner.go:211] docker network inspect addons-718596 returned with exit code 1
	I1018 08:30:07.462794 1276853 network_create.go:287] error running [docker network inspect addons-718596]: docker network inspect addons-718596: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-718596 not found
	I1018 08:30:07.462809 1276853 network_create.go:289] output of [docker network inspect addons-718596]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-718596 not found
	
	** /stderr **
	I1018 08:30:07.462922 1276853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:30:07.479341 1276853 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d74100}
	I1018 08:30:07.479386 1276853 network_create.go:124] attempt to create docker network addons-718596 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 08:30:07.479448 1276853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-718596 addons-718596
	I1018 08:30:07.540044 1276853 network_create.go:108] docker network addons-718596 192.168.49.0/24 created
	I1018 08:30:07.540076 1276853 kic.go:121] calculated static IP "192.168.49.2" for the "addons-718596" container
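
	A hedged way to confirm the subnet and gateway reported above for the freshly created network (these docker invocations are illustrative, not commands the test ran):

	docker network inspect addons-718596 \
	  --format '{{range .IPAM.Config}}subnet={{.Subnet}} gateway={{.Gateway}}{{end}}'
	# expected, per the log: subnet=192.168.49.0/24 gateway=192.168.49.1
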
	I1018 08:30:07.540161 1276853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 08:30:07.556261 1276853 cli_runner.go:164] Run: docker volume create addons-718596 --label name.minikube.sigs.k8s.io=addons-718596 --label created_by.minikube.sigs.k8s.io=true
	I1018 08:30:07.573188 1276853 oci.go:103] Successfully created a docker volume addons-718596
	I1018 08:30:07.573279 1276853 cli_runner.go:164] Run: docker run --rm --name addons-718596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718596 --entrypoint /usr/bin/test -v addons-718596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 08:30:09.680086 1276853 cli_runner.go:217] Completed: docker run --rm --name addons-718596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718596 --entrypoint /usr/bin/test -v addons-718596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.106767691s)
	I1018 08:30:09.680121 1276853 oci.go:107] Successfully prepared a docker volume addons-718596
	I1018 08:30:09.680159 1276853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:30:09.680177 1276853 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 08:30:09.680238 1276853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 08:30:14.075644 1276853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.395372656s)
	I1018 08:30:14.075676 1276853 kic.go:203] duration metric: took 4.395495868s to extract preloaded images to volume ...
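
	The two `docker run --rm` sidecars above pre-populate the named volume addons-718596 before the node container ever starts: the first mounts the empty volume at /var so docker seeds it from the kicbase image (and `test -d /var/lib` confirms the copy), and the second untars the lz4 preload into it. A minimal sketch of verifying the result (the choice of alpine as a throwaway image is an assumption):

	# the preload should have left a containers/ storage tree in the volume
	docker run --rm -v addons-718596:/var alpine ls /var/lib
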
	W1018 08:30:14.075822 1276853 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 08:30:14.075963 1276853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 08:30:14.142486 1276853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-718596 --name addons-718596 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718596 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-718596 --network addons-718596 --ip 192.168.49.2 --volume addons-718596:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 08:30:14.448782 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Running}}
	I1018 08:30:14.470203 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:14.491407 1276853 cli_runner.go:164] Run: docker exec addons-718596 stat /var/lib/dpkg/alternatives/iptables
	I1018 08:30:14.545159 1276853 oci.go:144] the created container "addons-718596" has a running status.
	I1018 08:30:14.545185 1276853 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa...
	I1018 08:30:15.457300 1276853 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 08:30:15.479457 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:15.496956 1276853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 08:30:15.496975 1276853 kic_runner.go:114] Args: [docker exec --privileged addons-718596 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 08:30:15.548493 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:15.567685 1276853 machine.go:93] provisionDockerMachine start ...
	I1018 08:30:15.567793 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:15.587494 1276853 main.go:141] libmachine: Using SSH client type: native
	I1018 08:30:15.588502 1276853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1018 08:30:15.588519 1276853 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 08:30:15.743372 1276853 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718596
	
	I1018 08:30:15.743397 1276853 ubuntu.go:182] provisioning hostname "addons-718596"
	I1018 08:30:15.743464 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:15.760833 1276853 main.go:141] libmachine: Using SSH client type: native
	I1018 08:30:15.761156 1276853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1018 08:30:15.761174 1276853 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-718596 && echo "addons-718596" | sudo tee /etc/hostname
	I1018 08:30:15.921056 1276853 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718596
	
	I1018 08:30:15.921149 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:15.938832 1276853 main.go:141] libmachine: Using SSH client type: native
	I1018 08:30:15.939147 1276853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1018 08:30:15.939168 1276853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-718596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-718596/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-718596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 08:30:16.088965 1276853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:30:16.088992 1276853 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 08:30:16.089021 1276853 ubuntu.go:190] setting up certificates
	I1018 08:30:16.089032 1276853 provision.go:84] configureAuth start
	I1018 08:30:16.089092 1276853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718596
	I1018 08:30:16.105242 1276853 provision.go:143] copyHostCerts
	I1018 08:30:16.105328 1276853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 08:30:16.105466 1276853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 08:30:16.105535 1276853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 08:30:16.105590 1276853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.addons-718596 san=[127.0.0.1 192.168.49.2 addons-718596 localhost minikube]
	I1018 08:30:16.577032 1276853 provision.go:177] copyRemoteCerts
	I1018 08:30:16.577097 1276853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 08:30:16.577137 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:16.602794 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:16.703417 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 08:30:16.720870 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 08:30:16.738969 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1018 08:30:16.755944 1276853 provision.go:87] duration metric: took 666.898547ms to configureAuth
	I1018 08:30:16.756006 1276853 ubuntu.go:206] setting minikube options for container-runtime
	I1018 08:30:16.756191 1276853 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:16.756298 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:16.777012 1276853 main.go:141] libmachine: Using SSH client type: native
	I1018 08:30:16.777340 1276853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1018 08:30:16.777361 1276853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 08:30:17.034090 1276853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 08:30:17.034178 1276853 machine.go:96] duration metric: took 1.466472783s to provisionDockerMachine
	I1018 08:30:17.034203 1276853 client.go:171] duration metric: took 11.248209061s to LocalClient.Create
	I1018 08:30:17.034249 1276853 start.go:167] duration metric: took 11.248319621s to libmachine.API.Create "addons-718596"
	I1018 08:30:17.034276 1276853 start.go:293] postStartSetup for "addons-718596" (driver="docker")
	I1018 08:30:17.034300 1276853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 08:30:17.034396 1276853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 08:30:17.034509 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:17.053760 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:17.160038 1276853 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 08:30:17.163354 1276853 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 08:30:17.163390 1276853 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 08:30:17.163401 1276853 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 08:30:17.163519 1276853 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 08:30:17.163560 1276853 start.go:296] duration metric: took 129.255685ms for postStartSetup
	I1018 08:30:17.163952 1276853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718596
	I1018 08:30:17.181416 1276853 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/config.json ...
	I1018 08:30:17.181707 1276853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:30:17.181766 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:17.199084 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:17.304718 1276853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 08:30:17.309079 1276853 start.go:128] duration metric: took 11.526838396s to createHost
	I1018 08:30:17.309164 1276853 start.go:83] releasing machines lock for "addons-718596", held for 11.527025517s
	I1018 08:30:17.309267 1276853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718596
	I1018 08:30:17.325293 1276853 ssh_runner.go:195] Run: cat /version.json
	I1018 08:30:17.325347 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:17.325367 1276853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 08:30:17.325427 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:17.342916 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:17.345295 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:17.541415 1276853 ssh_runner.go:195] Run: systemctl --version
	I1018 08:30:17.547535 1276853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 08:30:17.582099 1276853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 08:30:17.586301 1276853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 08:30:17.586368 1276853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 08:30:17.612596 1276853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 08:30:17.612616 1276853 start.go:495] detecting cgroup driver to use...
	I1018 08:30:17.612646 1276853 detect.go:187] detected "cgroupfs" cgroup driver on host os
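
	The cgroup driver matters because kubelet and CRI-O must agree on it; the detector above settled on "cgroupfs" for this Ubuntu 20.04 host. Two hedged ways to check the same thing by hand:

	docker info --format '{{.CgroupDriver}}'   # "cgroupfs" here, matching the docker info dump above
	stat -fc %T /sys/fs/cgroup                 # "tmpfs" suggests cgroup v1, "cgroup2fs" cgroup v2
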
	I1018 08:30:17.612704 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 08:30:17.630196 1276853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 08:30:17.642870 1276853 docker.go:218] disabling cri-docker service (if available) ...
	I1018 08:30:17.642986 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 08:30:17.660286 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 08:30:17.678225 1276853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 08:30:17.795587 1276853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 08:30:17.923259 1276853 docker.go:234] disabling docker service ...
	I1018 08:30:17.923344 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 08:30:17.944527 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 08:30:17.957667 1276853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 08:30:18.076889 1276853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 08:30:18.187472 1276853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 08:30:18.200438 1276853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 08:30:18.215224 1276853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 08:30:18.215293 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.225000 1276853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 08:30:18.225073 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.233892 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.242514 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.251025 1276853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 08:30:18.259261 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.268057 1276853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.281362 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.290579 1276853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 08:30:18.298273 1276853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 08:30:18.305698 1276853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:30:18.411952 1276853 ssh_runner.go:195] Run: sudo systemctl restart crio
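
	Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs manager with conmon in the "pod" cgroup, and open unprivileged low ports before crio is restarted. A hedged read-back of the resulting drop-in (the grep pattern and expected lines are inferred from the sed commands, not captured output):

	minikube -p addons-718596 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, given the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
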
	I1018 08:30:18.535124 1276853 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 08:30:18.535279 1276853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 08:30:18.539298 1276853 start.go:563] Will wait 60s for crictl version
	I1018 08:30:18.539411 1276853 ssh_runner.go:195] Run: which crictl
	I1018 08:30:18.542797 1276853 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 08:30:18.565782 1276853 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 08:30:18.565947 1276853 ssh_runner.go:195] Run: crio --version
	I1018 08:30:18.595054 1276853 ssh_runner.go:195] Run: crio --version
	I1018 08:30:18.626900 1276853 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 08:30:18.629585 1276853 cli_runner.go:164] Run: docker network inspect addons-718596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:30:18.645359 1276853 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 08:30:18.648782 1276853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:30:18.657865 1276853 kubeadm.go:883] updating cluster {Name:addons-718596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 08:30:18.657989 1276853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:30:18.658054 1276853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:30:18.688834 1276853 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:30:18.688857 1276853 crio.go:433] Images already preloaded, skipping extraction
	I1018 08:30:18.688912 1276853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:30:18.714008 1276853 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:30:18.714031 1276853 cache_images.go:85] Images are preloaded, skipping loading
	I1018 08:30:18.714039 1276853 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 08:30:18.714127 1276853 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-718596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
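
	The kubelet unit rendered above (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little later in the log) pins ExecStart to the versioned binary, wiring in the node IP, hostname override, and bootstrap kubeconfig. A hedged way to read the effective unit back from inside the node (illustrative only):

	minikube -p addons-718596 ssh -- sudo systemctl cat kubelet
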
	I1018 08:30:18.714212 1276853 ssh_runner.go:195] Run: crio config
	I1018 08:30:18.767246 1276853 cni.go:84] Creating CNI manager for ""
	I1018 08:30:18.767269 1276853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:30:18.767288 1276853 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 08:30:18.767319 1276853 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-718596 NodeName:addons-718596 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 08:30:18.767469 1276853 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-718596"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
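
	These four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a hedged sanity check, kubeadm can validate and render such a file without touching the cluster:

	# run inside the node; --dry-run prints the would-be manifests and exits
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
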
	
	I1018 08:30:18.767552 1276853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 08:30:18.775296 1276853 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 08:30:18.775366 1276853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 08:30:18.782828 1276853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 08:30:18.794964 1276853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 08:30:18.807932 1276853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1018 08:30:18.820556 1276853 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 08:30:18.824035 1276853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:30:18.833814 1276853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:30:18.950358 1276853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:30:18.965412 1276853 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596 for IP: 192.168.49.2
	I1018 08:30:18.965443 1276853 certs.go:195] generating shared ca certs ...
	I1018 08:30:18.965460 1276853 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:18.965606 1276853 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 08:30:19.539532 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt ...
	I1018 08:30:19.539564 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt: {Name:mk14aac60bd0c5732eec7cb3257c85d7c2ed1b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:19.539790 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key ...
	I1018 08:30:19.539806 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key: {Name:mkdeaaf9a4bd1141ccaf9c64e8f433b86d74556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:19.539918 1276853 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 08:30:19.931522 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt ...
	I1018 08:30:19.931553 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt: {Name:mka1c450e0a44ea2f01dd153e2e4b5997f1b2b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:19.931745 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key ...
	I1018 08:30:19.931758 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key: {Name:mkc0f689cdc574ec5d286e831e608d80527bb985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:19.931860 1276853 certs.go:257] generating profile certs ...
	I1018 08:30:19.931924 1276853 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.key
	I1018 08:30:19.931943 1276853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt with IP's: []
	I1018 08:30:20.645020 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt ...
	I1018 08:30:20.645052 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: {Name:mk5a3f63526334e3704b03a78c81701653479538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:20.645240 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.key ...
	I1018 08:30:20.645256 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.key: {Name:mk6690ee8a90850afad155899a6abb40f48b949a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:20.645331 1276853 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key.5ba3ca1a
	I1018 08:30:20.645347 1276853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt.5ba3ca1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 08:30:21.104996 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt.5ba3ca1a ...
	I1018 08:30:21.105026 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt.5ba3ca1a: {Name:mkd5d34969ee8b2bf2fa41e0d6fba7d1be0451b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:21.105211 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key.5ba3ca1a ...
	I1018 08:30:21.105225 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key.5ba3ca1a: {Name:mk42c5900156e6c5c1e92c4f35880e214e106590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:21.105320 1276853 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt.5ba3ca1a -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt
	I1018 08:30:21.105402 1276853 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key.5ba3ca1a -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key
	I1018 08:30:21.105458 1276853 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.key
	I1018 08:30:21.105483 1276853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.crt with IP's: []
	I1018 08:30:21.442761 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.crt ...
	I1018 08:30:21.442790 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.crt: {Name:mk3c36423478c35b23a45d0a41a38444911aac0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:21.442982 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.key ...
	I1018 08:30:21.442996 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.key: {Name:mk5764ba6408574aaecdb08ee06e0b6dddc0d0ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
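
The profile certificates generated above include an apiserver serving cert whose IP SANs cover the service VIP (10.96.0.1), loopback, and the node IP (192.168.49.2); the `.5ba3ca1a` suffix appears to key the cert file to that SAN set so a changed IP list would produce a new cert. A minimal sketch of producing a cert with those IP SANs via Go's crypto/x509 (self-signed here for brevity; minikube signs with the minikubeCA key):

    // cert_sans.go: a self-signed stand-in showing IP SANs on a serving cert.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The same IP set the log signs into apiserver.crt.5ba3ca1a.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
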
	I1018 08:30:21.443192 1276853 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 08:30:21.443234 1276853 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 08:30:21.443263 1276853 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 08:30:21.443301 1276853 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 08:30:21.443887 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 08:30:21.461741 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 08:30:21.479455 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 08:30:21.496903 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 08:30:21.514024 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 08:30:21.531077 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 08:30:21.547278 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 08:30:21.564183 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 08:30:21.580431 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 08:30:21.596963 1276853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 08:30:21.608765 1276853 ssh_runner.go:195] Run: openssl version
	I1018 08:30:21.614821 1276853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 08:30:21.622726 1276853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:30:21.626039 1276853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:30:21.626095 1276853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:30:21.666701 1276853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
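
The two symlink steps above install the minikube CA into the system trust store: one link by name under /etc/ssl/certs, and one under the OpenSSL subject-hash filename (b5213941.0), which is how OpenSSL locates CAs in a hashed directory. A minimal sketch of the hash-link step, shelling out to the same openssl invocation as the log (requires openssl on PATH; writing into /etc/ssl/certs needs root):

    // ca_hash_link.go: compute the OpenSSL subject hash and create "<hash>.0".
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkByHash(pemPath, certDir string) error {
    	// Identical to the log's: openssl x509 -hash -noout -in <pem>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace any existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
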
	I1018 08:30:21.674713 1276853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 08:30:21.677873 1276853 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 08:30:21.677914 1276853 kubeadm.go:400] StartCluster: {Name:addons-718596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:30:21.677985 1276853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:30:21.678036 1276853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:30:21.705348 1276853 cri.go:89] found id: ""
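
Before running kubeadm init, StartCluster lists kube-system containers through the CRI; the empty ID list here confirms a fresh node with no stale control-plane containers to clean up. A minimal sketch of that listing, reusing the exact crictl command from the log (sudo and crictl must be available on the node):

    // cri_list.go: list kube-system container IDs via crictl, as in the log.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("found %d kube-system containers\n", len(ids)) // 0 on a fresh node
    }
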
	I1018 08:30:21.705488 1276853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 08:30:21.713820 1276853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 08:30:21.722113 1276853 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 08:30:21.722175 1276853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 08:30:21.729614 1276853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 08:30:21.729638 1276853 kubeadm.go:157] found existing configuration files:
	
	I1018 08:30:21.729706 1276853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 08:30:21.737257 1276853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 08:30:21.737328 1276853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 08:30:21.744507 1276853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 08:30:21.752022 1276853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 08:30:21.752126 1276853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 08:30:21.759528 1276853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 08:30:21.766705 1276853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 08:30:21.766767 1276853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 08:30:21.773677 1276853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 08:30:21.781136 1276853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 08:30:21.781255 1276853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 08:30:21.788577 1276853 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 08:30:21.824222 1276853 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 08:30:21.824288 1276853 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 08:30:21.852327 1276853 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 08:30:21.852443 1276853 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 08:30:21.852512 1276853 kubeadm.go:318] OS: Linux
	I1018 08:30:21.852591 1276853 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 08:30:21.852693 1276853 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 08:30:21.852775 1276853 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 08:30:21.852853 1276853 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 08:30:21.852930 1276853 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 08:30:21.853032 1276853 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 08:30:21.853108 1276853 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 08:30:21.853201 1276853 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 08:30:21.853262 1276853 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 08:30:21.922996 1276853 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 08:30:21.923167 1276853 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 08:30:21.923285 1276853 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 08:30:21.936272 1276853 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 08:30:21.943234 1276853 out.go:252]   - Generating certificates and keys ...
	I1018 08:30:21.943349 1276853 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 08:30:21.943435 1276853 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 08:30:23.423582 1276853 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 08:30:23.767009 1276853 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 08:30:23.965278 1276853 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 08:30:24.459155 1276853 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 08:30:24.978008 1276853 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 08:30:24.978351 1276853 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-718596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:30:25.645134 1276853 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 08:30:25.645413 1276853 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-718596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:30:25.723134 1276853 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 08:30:26.433378 1276853 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 08:30:26.957625 1276853 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 08:30:26.957880 1276853 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 08:30:28.379666 1276853 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 08:30:28.513923 1276853 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 08:30:28.997950 1276853 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 08:30:30.268489 1276853 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 08:30:30.519851 1276853 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 08:30:30.520397 1276853 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 08:30:30.523071 1276853 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 08:30:30.526411 1276853 out.go:252]   - Booting up control plane ...
	I1018 08:30:30.526519 1276853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 08:30:30.526603 1276853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 08:30:30.526674 1276853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 08:30:30.542280 1276853 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 08:30:30.542625 1276853 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 08:30:30.550699 1276853 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 08:30:30.551084 1276853 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 08:30:30.551370 1276853 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 08:30:30.678840 1276853 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 08:30:30.678969 1276853 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 08:30:31.684209 1276853 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.004700026s
	I1018 08:30:31.693548 1276853 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 08:30:31.693730 1276853 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 08:30:31.693827 1276853 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 08:30:31.694155 1276853 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 08:30:34.564594 1276853 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.870042289s
	I1018 08:30:36.317849 1276853 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.623470697s
	I1018 08:30:38.195349 1276853 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501289314s
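
The control-plane checks above poll each component's local health endpoint until it reports healthy: the kubelet on plain HTTP at 127.0.0.1:10248/healthz, and the apiserver, controller-manager, and scheduler on their respective livez/healthz ports. A minimal sketch of such a poll against the kubelet endpoint (port and the 4m budget are taken from the log; this is not kubeadm's checker):

    // healthz_wait.go: poll an HTTP healthz endpoint until it returns 200.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %v", url, timeout)
    }

    func main() {
    	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
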
	I1018 08:30:38.218682 1276853 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 08:30:38.234580 1276853 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 08:30:38.253246 1276853 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 08:30:38.253573 1276853 kubeadm.go:318] [mark-control-plane] Marking the node addons-718596 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 08:30:38.270595 1276853 kubeadm.go:318] [bootstrap-token] Using token: einyuv.xotqzq233w49k3mh
	I1018 08:30:38.273708 1276853 out.go:252]   - Configuring RBAC rules ...
	I1018 08:30:38.273853 1276853 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 08:30:38.277976 1276853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 08:30:38.286441 1276853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 08:30:38.291243 1276853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 08:30:38.295452 1276853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 08:30:38.301441 1276853 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 08:30:38.603962 1276853 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 08:30:39.038580 1276853 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 08:30:39.604157 1276853 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 08:30:39.605583 1276853 kubeadm.go:318] 
	I1018 08:30:39.605655 1276853 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 08:30:39.605660 1276853 kubeadm.go:318] 
	I1018 08:30:39.605736 1276853 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 08:30:39.605741 1276853 kubeadm.go:318] 
	I1018 08:30:39.605767 1276853 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 08:30:39.605826 1276853 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 08:30:39.605875 1276853 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 08:30:39.605880 1276853 kubeadm.go:318] 
	I1018 08:30:39.605934 1276853 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 08:30:39.605939 1276853 kubeadm.go:318] 
	I1018 08:30:39.605986 1276853 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 08:30:39.605991 1276853 kubeadm.go:318] 
	I1018 08:30:39.606042 1276853 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 08:30:39.606117 1276853 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 08:30:39.606184 1276853 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 08:30:39.606188 1276853 kubeadm.go:318] 
	I1018 08:30:39.606271 1276853 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 08:30:39.606347 1276853 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 08:30:39.606352 1276853 kubeadm.go:318] 
	I1018 08:30:39.606441 1276853 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token einyuv.xotqzq233w49k3mh \
	I1018 08:30:39.606544 1276853 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 \
	I1018 08:30:39.606564 1276853 kubeadm.go:318] 	--control-plane 
	I1018 08:30:39.606568 1276853 kubeadm.go:318] 
	I1018 08:30:39.606652 1276853 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 08:30:39.606656 1276853 kubeadm.go:318] 
	I1018 08:30:39.606737 1276853 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token einyuv.xotqzq233w49k3mh \
	I1018 08:30:39.606838 1276853 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 
	I1018 08:30:39.609192 1276853 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 08:30:39.609431 1276853 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 08:30:39.609540 1276853 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 08:30:39.609555 1276853 cni.go:84] Creating CNI manager for ""
	I1018 08:30:39.609563 1276853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:30:39.612752 1276853 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 08:30:39.615561 1276853 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 08:30:39.619691 1276853 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 08:30:39.619758 1276853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 08:30:39.631978 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
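
With no CNI requested explicitly, the docker driver plus crio runtime combination selects kindnet, and the manifest is applied with the version-pinned kubectl against the in-VM kubeconfig. A minimal sketch of that selection, reduced to the single case visible in this log (the fallback here is illustrative, not minikube's full decision table):

    // cni_choice.go: the one decision visible in this log, reduced to a function.
    package main

    import "fmt"

    func chooseCNI(driver, runtime, requested string) string {
    	if requested != "" {
    		return requested // an explicit --cni flag would win
    	}
    	if driver == "docker" && runtime == "crio" {
    		return "kindnet"
    	}
    	return "bridge" // illustrative fallback, not minikube's full table
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "crio", "")) // kindnet
    }
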
	I1018 08:30:39.949511 1276853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 08:30:39.949654 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:39.949733 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-718596 minikube.k8s.io/updated_at=2025_10_18T08_30_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=addons-718596 minikube.k8s.io/primary=true
	I1018 08:30:40.156919 1276853 ops.go:34] apiserver oom_adj: -16
	I1018 08:30:40.157031 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:40.657834 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:41.157185 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:41.657876 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:42.158041 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:42.657168 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:43.157355 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:43.657323 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:44.157673 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:44.657104 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:44.739668 1276853 kubeadm.go:1113] duration metric: took 4.790066572s to wait for elevateKubeSystemPrivileges
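
The repeated `kubectl get sa default` calls above are elevateKubeSystemPrivileges waiting, at roughly 500ms intervals per the timestamps, for the default ServiceAccount to exist before the minikube-rbac clusterrolebinding can take effect. A minimal sketch of that wait loop (the retry helper here is illustrative, not minikube's):

    // sa_wait.go: retry `kubectl get sa default` on a 500ms tick until it succeeds.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		if exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
    			"--kubeconfig="+kubeconfig).Run() == nil {
    			return nil // ServiceAccount exists; RBAC binding can proceed
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
    		fmt.Println("gave up:", err)
    	}
    }
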
	I1018 08:30:44.739698 1276853 kubeadm.go:402] duration metric: took 23.061787025s to StartCluster
	I1018 08:30:44.739716 1276853 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:44.739828 1276853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:30:44.740241 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:44.740438 1276853 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:30:44.740576 1276853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 08:30:44.740815 1276853 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:44.740850 1276853 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 08:30:44.740930 1276853 addons.go:69] Setting yakd=true in profile "addons-718596"
	I1018 08:30:44.740942 1276853 addons.go:69] Setting inspektor-gadget=true in profile "addons-718596"
	I1018 08:30:44.740951 1276853 addons.go:69] Setting metrics-server=true in profile "addons-718596"
	I1018 08:30:44.740962 1276853 addons.go:238] Setting addon metrics-server=true in "addons-718596"
	I1018 08:30:44.740963 1276853 addons.go:238] Setting addon inspektor-gadget=true in "addons-718596"
	I1018 08:30:44.740983 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.740993 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.741440 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.741446 1276853 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-718596"
	I1018 08:30:44.741457 1276853 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-718596"
	I1018 08:30:44.741472 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.741848 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.742099 1276853 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-718596"
	I1018 08:30:44.742118 1276853 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-718596"
	I1018 08:30:44.742140 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.742531 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.745431 1276853 addons.go:69] Setting cloud-spanner=true in profile "addons-718596"
	I1018 08:30:44.745564 1276853 addons.go:238] Setting addon cloud-spanner=true in "addons-718596"
	I1018 08:30:44.745626 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.746136 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.748744 1276853 addons.go:69] Setting registry=true in profile "addons-718596"
	I1018 08:30:44.749370 1276853 addons.go:238] Setting addon registry=true in "addons-718596"
	I1018 08:30:44.749445 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.741440 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.748904 1276853 addons.go:69] Setting registry-creds=true in profile "addons-718596"
	I1018 08:30:44.740947 1276853 addons.go:238] Setting addon yakd=true in "addons-718596"
	I1018 08:30:44.748914 1276853 addons.go:69] Setting storage-provisioner=true in profile "addons-718596"
	I1018 08:30:44.748918 1276853 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-718596"
	I1018 08:30:44.748931 1276853 addons.go:69] Setting volcano=true in profile "addons-718596"
	I1018 08:30:44.748935 1276853 addons.go:69] Setting volumesnapshots=true in profile "addons-718596"
	I1018 08:30:44.749278 1276853 out.go:179] * Verifying Kubernetes components...
	I1018 08:30:44.749720 1276853 addons.go:69] Setting gcp-auth=true in profile "addons-718596"
	I1018 08:30:44.749747 1276853 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-718596"
	I1018 08:30:44.749751 1276853 addons.go:69] Setting default-storageclass=true in profile "addons-718596"
	I1018 08:30:44.749756 1276853 addons.go:69] Setting ingress=true in profile "addons-718596"
	I1018 08:30:44.749759 1276853 addons.go:69] Setting ingress-dns=true in profile "addons-718596"
	I1018 08:30:44.752171 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.766961 1276853 addons.go:238] Setting addon registry-creds=true in "addons-718596"
	I1018 08:30:44.767059 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.767551 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.767629 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.768160 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.774429 1276853 mustload.go:65] Loading cluster: addons-718596
	I1018 08:30:44.774703 1276853 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:44.774987 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.782679 1276853 addons.go:238] Setting addon storage-provisioner=true in "addons-718596"
	I1018 08:30:44.782739 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.783223 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.793187 1276853 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-718596"
	I1018 08:30:44.793278 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.793782 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.803586 1276853 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-718596"
	I1018 08:30:44.804027 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.808154 1276853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-718596"
	I1018 08:30:44.808567 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.821671 1276853 addons.go:238] Setting addon volcano=true in "addons-718596"
	I1018 08:30:44.821722 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.822197 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.825596 1276853 addons.go:238] Setting addon ingress=true in "addons-718596"
	I1018 08:30:44.825650 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.826120 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.840193 1276853 addons.go:238] Setting addon volumesnapshots=true in "addons-718596"
	I1018 08:30:44.840244 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.842266 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.843617 1276853 addons.go:238] Setting addon ingress-dns=true in "addons-718596"
	I1018 08:30:44.843669 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.844301 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
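
The interleaved `Setting addon ...` and `docker container inspect` lines above, with their out-of-order timestamps, suggest the addon enabler fans each requested addon out concurrently (inspect container state, copy manifests, apply) and then waits for all of them. A minimal sketch of that fan-out/wait pattern, assuming goroutine-per-addon (names and the enable body are placeholders, not minikube's API):

    // addon_fanout.go: enable each addon on its own goroutine and wait for all.
    package main

    import (
    	"fmt"
    	"sync"
    )

    func enableAddon(name string) error {
    	// placeholder for: container inspect, scp manifests, kubectl apply
    	fmt.Println("enabling", name)
    	return nil
    }

    func main() {
    	toEnable := []string{"metrics-server", "inspektor-gadget", "registry", "ingress"}
    	errs := make(chan error, len(toEnable))
    	var wg sync.WaitGroup
    	for _, name := range toEnable {
    		wg.Add(1)
    		go func(n string) {
    			defer wg.Done()
    			if err := enableAddon(n); err != nil {
    				errs <- fmt.Errorf("%s: %w", n, err)
    			}
    		}(name)
    	}
    	wg.Wait()
    	close(errs)
    	for err := range errs {
    		fmt.Println("addon failed:", err)
    	}
    }
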
	I1018 08:30:44.904479 1276853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:30:44.908748 1276853 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 08:30:44.914957 1276853 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 08:30:44.914984 1276853 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 08:30:44.915056 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:44.958719 1276853 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 08:30:44.960010 1276853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 08:30:44.960119 1276853 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 08:30:45.008710 1276853 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:45.008797 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 08:30:45.008920 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.023218 1276853 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:45.023240 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 08:30:45.023313 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.072007 1276853 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 08:30:45.075075 1276853 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 08:30:45.075109 1276853 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 08:30:45.075188 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.076700 1276853 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 08:30:45.081519 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 08:30:45.081550 1276853 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 08:30:45.081648 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.087879 1276853 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 08:30:45.092230 1276853 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:45.092265 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 08:30:45.092354 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:44.960501 1276853 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 08:30:45.099480 1276853 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:45.099510 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 08:30:45.099594 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.112834 1276853 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 08:30:45.119976 1276853 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:45.120011 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 08:30:45.120104 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.126253 1276853 addons.go:238] Setting addon default-storageclass=true in "addons-718596"
	I1018 08:30:45.126311 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:45.126380 1276853 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 08:30:45.130066 1276853 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-718596"
	I1018 08:30:45.130117 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:45.130580 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:45.139933 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:45.142054 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:45.207518 1276853 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:45.207610 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 08:30:45.211995 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 08:30:45.212132 1276853 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 08:30:45.212209 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.212730 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	W1018 08:30:45.216749 1276853 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 08:30:45.227210 1276853 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 08:30:45.232500 1276853 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 08:30:45.235827 1276853 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 08:30:45.235889 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 08:30:45.235970 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.239015 1276853 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 08:30:45.241192 1276853 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:45.242049 1276853 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:45.242083 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 08:30:45.242161 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.281551 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.245053 1276853 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:45.282811 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 08:30:45.282914 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.314441 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 08:30:45.318636 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 08:30:45.321496 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 08:30:45.328128 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 08:30:45.331105 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 08:30:45.336072 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 08:30:45.339026 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 08:30:45.343624 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.349040 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 08:30:45.352119 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 08:30:45.352148 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 08:30:45.352232 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.427970 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.451433 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.458766 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.468671 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.487331 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.510786 1276853 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:45.510807 1276853 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 08:30:45.510868 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.523342 1276853 out.go:179]   - Using image docker.io/busybox:stable
	I1018 08:30:45.532078 1276853 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 08:30:45.538886 1276853 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:45.538911 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 08:30:45.538991 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.549372 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.564187 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.571226 1276853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:30:45.573247 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.576013 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	W1018 08:30:45.582808 1276853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:45.582844 1276853 retry.go:31] will retry after 135.481831ms: ssh: handshake failed: EOF
	W1018 08:30:45.583061 1276853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:45.583074 1276853 retry.go:31] will retry after 249.090875ms: ssh: handshake failed: EOF
	I1018 08:30:45.584076 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	W1018 08:30:45.588833 1276853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:45.588858 1276853 retry.go:31] will retry after 204.502929ms: ssh: handshake failed: EOF
	I1018 08:30:45.618613 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	W1018 08:30:45.620120 1276853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:45.620143 1276853 retry.go:31] will retry after 277.161385ms: ssh: handshake failed: EOF
	I1018 08:30:45.628156 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
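
The `ssh: handshake failed: EOF` warnings above are a known symptom of dialing many SSH sessions at once right after the node comes up; sshutil logs each failure and retries after a short randomized delay. A minimal sketch of dial-with-randomized-retry matching the 100-300ms spread seen in the log (the dial function is a stand-in, not minikube's sshutil):

    // ssh_retry.go: dial, and on failure retry after a short randomized delay.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func dialWithRetry(dial func() error, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = dial(); err == nil {
    			return nil
    		}
    		// 100-300ms, roughly the "will retry after ..." spread in the log.
    		delay := 100*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond
    		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
    		time.Sleep(delay)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := dialWithRetry(func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("ssh: handshake failed: EOF")
    		}
    		return nil
    	}, 5)
    	fmt.Println("result:", err)
    }
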
	I1018 08:30:45.989764 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:46.089627 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 08:30:46.089654 1276853 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 08:30:46.118691 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:46.121525 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:46.144776 1276853 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 08:30:46.144849 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 08:30:46.246226 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:46.317698 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:46.335605 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 08:30:46.335680 1276853 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 08:30:46.372637 1276853 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 08:30:46.372722 1276853 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 08:30:46.393155 1276853 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 08:30:46.393238 1276853 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 08:30:46.453708 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 08:30:46.453788 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 08:30:46.456620 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:46.476487 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 08:30:46.476561 1276853 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 08:30:46.480652 1276853 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:46.480725 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 08:30:46.503136 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:46.546939 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:46.550273 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:46.552202 1276853 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:46.552269 1276853 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 08:30:46.601816 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:46.601887 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 08:30:46.605127 1276853 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 08:30:46.605201 1276853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 08:30:46.606809 1276853 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:46.606874 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 08:30:46.609023 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 08:30:46.609087 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 08:30:46.610968 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:46.731309 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 08:30:46.731387 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 08:30:46.749196 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:46.759688 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:46.771815 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:46.785111 1276853 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 08:30:46.785199 1276853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 08:30:46.947682 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 08:30:46.947707 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 08:30:46.961750 1276853 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.00170476s)
	I1018 08:30:46.961780 1276853 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
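
The sed pipeline above rewrites the coredns ConfigMap in place, inserting a hosts stanza ahead of the forward directive so that host.minikube.internal resolves to the gateway address 192.168.49.1. The injected block can be inspected afterwards; a verification sketch, assuming the stock coredns ConfigMap layout with its Corefile data key:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
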
	I1018 08:30:46.962735 1276853 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.391485706s)
	I1018 08:30:46.963369 1276853 node_ready.go:35] waiting up to 6m0s for node "addons-718596" to be "Ready" ...
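
node_ready.go polls the node object until its Ready condition reports True, for up to six minutes. With stock kubectl the equivalent check is a one-liner; a sketch, not the harness code itself:

    kubectl wait --for=condition=Ready node/addons-718596 --timeout=6m0s

Until that condition flips, the harness logs the "Ready":"False" warnings that recur below.
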
	I1018 08:30:47.027772 1276853 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 08:30:47.027805 1276853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 08:30:47.111662 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.121822736s)
	I1018 08:30:47.193446 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 08:30:47.193473 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 08:30:47.394875 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 08:30:47.394909 1276853 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 08:30:47.445958 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 08:30:47.445980 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 08:30:47.465908 1276853 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-718596" context rescaled to 1 replicas
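
Rescaling coredns to one replica keeps the single-node test cluster light. The plain-kubectl equivalent of the rescale logged here, as a sketch run against the same kubeconfig:

    kubectl -n kube-system scale deployment coredns --replicas=1
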
	I1018 08:30:47.656095 1276853 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:47.656120 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 08:30:47.710527 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 08:30:47.710549 1276853 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 08:30:47.848567 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:47.852206 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 08:30:47.852226 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 08:30:48.003413 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 08:30:48.003520 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 08:30:48.027641 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.908908126s)
	I1018 08:30:48.027830 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.906236022s)
	I1018 08:30:48.027950 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.781643415s)
	I1018 08:30:48.028029 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.710258815s)
	I1018 08:30:48.159908 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:30:48.159990 1276853 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 08:30:48.297155 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1018 08:30:48.979835 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:49.525050 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.068348068s)
	I1018 08:30:49.592851 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.089639931s)
	I1018 08:30:50.235077 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.688046818s)
	W1018 08:30:50.980315 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:51.018769 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.468408923s)
	I1018 08:30:51.018821 1276853 addons.go:479] Verifying addon ingress=true in "addons-718596"
	I1018 08:30:51.019160 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.408128824s)
	W1018 08:30:51.019192 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:51.019209 1276853 retry.go:31] will retry after 163.875129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
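
The stderr above shows the root cause of this retry loop: kubectl's client-side validation rejects ig-crd.yaml because the document declares neither apiVersion nor kind, the two fields every Kubernetes manifest must carry, so re-applying the identical file cannot succeed. For contrast, a minimal CRD that passes validation looks like the following; all names are hypothetical and only illustrate the required header:

    cat <<'EOF' | kubectl apply --dry-run=client -f -
    # hypothetical CRD, shown only for the apiVersion/kind header it must carry
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com
    spec:
      group: example.com
      names: {kind: Widget, plural: widgets}
      scope: Namespaced
      versions:
      - name: v1
        served: true
        storage: true
        schema: {openAPIV3Schema: {type: object}}
    EOF

The --validate=false escape hatch kubectl suggests would only mask the malformed manifest rather than fix it.
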
	I1018 08:30:51.019315 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.270043394s)
	I1018 08:30:51.019333 1276853 addons.go:479] Verifying addon metrics-server=true in "addons-718596"
	I1018 08:30:51.019393 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.259632277s)
	I1018 08:30:51.019474 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.24757389s)
	I1018 08:30:51.019489 1276853 addons.go:479] Verifying addon registry=true in "addons-718596"
	I1018 08:30:51.019808 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.171208669s)
	W1018 08:30:51.019886 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 08:30:51.019906 1276853 retry.go:31] will retry after 128.194154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
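
Unlike the ig-crd failure, this one is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is submitted in the same kubectl invocation that creates its CRD, and the API server has not yet established the new type when the CR arrives, hence "ensure CRDs are installed first". The harness simply retries with --force below, which succeeds once the CRD is registered; a common alternative outside this harness is to gate the CR on CRD establishment, sketched here with the CRD name taken from the stdout above:

    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
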
	I1018 08:30:51.022542 1276853 out.go:179] * Verifying registry addon...
	I1018 08:30:51.022652 1276853 out.go:179] * Verifying ingress addon...
	I1018 08:30:51.022676 1276853 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-718596 service yakd-dashboard -n yakd-dashboard
	
	I1018 08:30:51.027144 1276853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 08:30:51.027998 1276853 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 08:30:51.031520 1276853 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:30:51.031546 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.032099 1276853 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 08:30:51.032115 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
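
From here on, kapi.go tracks each addon by listing pods with a known label selector and polling until they leave Pending. The same check can be reproduced by hand; a sketch using the selectors from the log lines above:

    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry \
      -o jsonpath='{.items[*].status.phase}'
    kubectl -n ingress-nginx wait --for=condition=Ready pod \
      -l app.kubernetes.io/name=ingress-nginx --timeout=5m
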
	I1018 08:30:51.148280 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:51.184129 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:51.460075 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.162872432s)
	I1018 08:30:51.460161 1276853 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-718596"
	I1018 08:30:51.464546 1276853 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 08:30:51.468246 1276853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 08:30:51.496140 1276853 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:30:51.496162 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:51.605338 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.605771 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:51.975737 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.077142 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.077574 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.472449 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.531101 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.531253 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.750342 1276853 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 08:30:52.750425 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:52.766446 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:52.872507 1276853 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 08:30:52.884843 1276853 addons.go:238] Setting addon gcp-auth=true in "addons-718596"
	I1018 08:30:52.884891 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:52.885336 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:52.903387 1276853 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 08:30:52.903442 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:52.928016 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
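
For the gcp-auth file copies the harness needs SSH access to the node, so it asks Docker which host port is published for the container's port 22; the nested index calls in the template walk NetworkSettings.Ports["22/tcp"][0].HostPort, yielding 34591 here. Standalone, with the shell quoting simplified:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-718596
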
	I1018 08:30:52.971517 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.031940 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.032304 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:30:53.466158 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:53.471873 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.530905 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.531121 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:53.977078 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.978488 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.83016758s)
	I1018 08:30:53.978600 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.794439994s)
	W1018 08:30:53.978661 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:53.978684 1276853 retry.go:31] will retry after 556.924767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:53.978617 1276853 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.075207052s)
	I1018 08:30:53.982094 1276853 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 08:30:53.984940 1276853 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:53.987783 1276853 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 08:30:53.987807 1276853 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 08:30:54.000739 1276853 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 08:30:54.000815 1276853 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 08:30:54.017489 1276853 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:54.017515 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 08:30:54.033455 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:54.034043 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.034248 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.484291 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:54.524239 1276853 addons.go:479] Verifying addon gcp-auth=true in "addons-718596"
	I1018 08:30:54.527417 1276853 out.go:179] * Verifying gcp-auth addon...
	I1018 08:30:54.530714 1276853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 08:30:54.536127 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:54.581932 1276853 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 08:30:54.581960 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:54.582224 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.582294 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.971431 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.035170 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.036731 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.037822 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:30:55.372876 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:55.372906 1276853 retry.go:31] will retry after 367.26232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 08:30:55.466835 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:55.471397 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.530291 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.531646 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.533806 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:55.741198 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:55.971665 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:56.030610 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.032135 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.036804 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.471856 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:56.536381 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:56.536412 1276853 retry.go:31] will retry after 974.417439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:56.537397 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.537694 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.538063 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.971684 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.032303 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.033006 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.034810 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:30:57.468492 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:57.470747 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.510962 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:57.531479 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.533698 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.534584 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:57.971358 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:58.033317 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.033329 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.035070 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:30:58.313662 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:58.313698 1276853 retry.go:31] will retry after 883.018678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:58.471462 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:58.530137 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.531451 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.533149 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:58.970822 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.030956 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.031372 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.033299 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:59.197530 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:59.472460 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.530742 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.533405 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.534804 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:30:59.970558 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:59.974804 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:31:00.039364 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:00.039462 1276853 retry.go:31] will retry after 1.398897827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:00.045554 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.046766 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.048482 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.471517 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:00.531461 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.532274 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.533359 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.970921 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.030906 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.031601 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.033375 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.438611 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:01.475066 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.531596 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.533200 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.534487 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.973531 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.035527 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.035890 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.037321 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:31:02.252803 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:02.252832 1276853 retry.go:31] will retry after 1.826604097s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 08:31:02.466865 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:02.471665 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.530646 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.531416 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:02.533193 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.972136 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.031412 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.031582 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:03.033699 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.473174 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.531615 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.531747 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:03.533421 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.972074 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.031372 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.031507 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:04.033815 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.080165 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 08:31:04.467038 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:04.473361 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.536956 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.537908 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.538460 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:31:04.886403 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:04.886446 1276853 retry.go:31] will retry after 4.750594541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
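
The intervals chosen by retry.go have now grown from under 200ms through roughly 0.5s, 1s, and 2s to 4.75s: exponential backoff with jitter. In shell terms (a sketch of the pattern, not minikube's Go implementation) the loop amounts to:

    delay=0.2
    until kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml; do
      sleep "$delay"
      delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')   # roughly double each round
    done

Since ig-crd.yaml itself is the malformed input, no number of retries can converge here; the backoff only spaces out identical failures.
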
	I1018 08:31:04.971515 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.030693 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.031585 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.033720 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.471345 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.530244 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.531763 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.533336 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.971270 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.030495 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.031531 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.034084 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:06.472131 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.530893 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.531784 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.533854 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:06.966382 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:06.971255 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.030522 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.030657 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.033948 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.472248 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.531029 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.531408 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.533261 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.971973 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.031441 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.031781 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.033256 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.471306 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.529956 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.531257 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.533184 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.971005 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.031081 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.031709 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.033938 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:09.466899 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:09.471462 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.530661 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.531668 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.533262 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
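
The repeating kapi.go:96 lines above come from per-addon watchers that poll pods by label selector until one leaves Pending. A minimal sketch of that loop, assuming client-go; the kubeconfig path and selector are taken from this log, while the loop itself is illustrative rather than minikube's actual code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        selector := "kubernetes.io/minikube-addons=registry"
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, err // give up on API errors
                }
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        return true, nil // done: at least one pod is Running
                    }
                    // Logged on every poll, which is why each selector above
                    // repeats roughly twice per second.
                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                }
                return false, nil // keep polling
            })
        if err != nil {
            panic(err)
        }
    }
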
	I1018 08:31:09.637494 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:09.972992 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:10.101684 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.102206 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.102699 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.471256 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:31:10.526557 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:10.526599 1276853 retry.go:31] will retry after 4.835119494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
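
The stderr above is kubectl's client-side validation: at least one YAML document in ig-crd.yaml reached the node without apiVersion and kind set, so the whole apply exits 1 even though the other manifests in the same invocation were accepted unchanged. addons.go treats this as retryable. A sketch of the retry-with-growing-backoff pattern that retry.go:31 logs; the helper name and timings are illustrative, not minikube's actual API:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // Jitter on top of a doubling base yields spacings like the
            // 4.8s, 7.5s and 19.2s waits seen in this log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        // Stand-in for the failing "kubectl apply" above.
        applyAddon := func() error { return errors.New("apply failed") }
        _ = retryWithBackoff(3, 3*time.Second, applyAddon)
    }
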
	I1018 08:31:10.530309 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.531783 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.533477 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.971799 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.031240 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.031489 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.033085 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:11.472032 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.531249 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.531797 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.534036 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:11.965494 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:11.971417 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:12.030579 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.031164 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.033244 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.471145 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:12.531192 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.532425 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.533452 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.971810 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.031628 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.032245 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.039433 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:13.471820 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.531458 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.531493 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.533340 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:13.966460 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:13.971548 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.030368 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.031578 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.033957 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.470835 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.531141 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.531302 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.533568 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.971060 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.030735 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:15.033367 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.034294 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:15.362844 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:15.471229 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.532068 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:15.532747 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.537765 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:15.967527 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:15.971371 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.031296 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:16.032802 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.034043 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:16.172044 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:16.172077 1276853 retry.go:31] will retry after 7.484678622s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
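
The identical failure recurs because the file on disk does not change between attempts; backoff only helps with transient apiserver errors. A hedged diagnostic sketch that walks the documents in ig-crd.yaml and reports any that lack apiVersion or kind (an empty or truncated leading document produces exactly this kubectl error); the path comes from this log, everything else is illustrative:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for i := 0; ; i++ {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // no more documents
                }
                panic(err)
            }
            // A nil map also covers a document that is entirely empty.
            if doc["apiVersion"] == nil || doc["kind"] == nil {
                fmt.Printf("document %d: apiVersion or kind not set\n", i)
            }
        }
    }
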
	I1018 08:31:16.471304 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.529892 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:16.531014 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.533057 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:16.971308 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.030452 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:17.032058 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.034046 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.471479 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.534958 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.535183 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:17.535436 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.971500 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.030649 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:18.031746 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.034465 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:18.466318 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:18.471333 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.530316 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:18.530694 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.534233 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:18.970886 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.030811 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.031398 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:19.033074 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.471975 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.530515 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:19.531319 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.533218 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.971270 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.031538 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:20.031793 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.034391 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:20.466377 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:20.470948 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.531034 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:20.531185 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.533806 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:20.972158 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.030144 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:21.030743 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.032842 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.471613 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.530335 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:21.530927 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.532945 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.971593 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.030373 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:22.030733 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.032815 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:22.471304 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.531267 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.531400 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:22.533692 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:22.966846 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:22.971425 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.030129 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:23.030876 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.033446 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.471698 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.531323 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:23.532287 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.533629 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.657893 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:23.972347 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:24.031347 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:24.032703 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.034900 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.472283 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:31:24.477722 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:24.477750 1276853 retry.go:31] will retry after 19.241906076s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:24.531284 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:24.532258 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.533119 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.971757 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.030776 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:25.032035 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:25.033478 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:25.467166 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:25.471785 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.531075 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:25.531768 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:25.533515 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:25.971638 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.031485 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:26.032269 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:26.033642 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.526200 1276853 node_ready.go:49] node "addons-718596" is "Ready"
	I1018 08:31:26.526230 1276853 node_ready.go:38] duration metric: took 39.562829828s for node "addons-718596" to be "Ready" ...
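
node_ready.go's check reduces to reading the node's NodeReady condition, which the kubelet flips to True once the runtime and CNI are up (here after 39.6s). A minimal sketch, assuming client-go; the node name and kubeconfig path are from this log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node's NodeReady condition is True.
    func nodeIsReady(client kubernetes.Interface, name string) (bool, error) {
        node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        ready, err := nodeIsReady(kubernetes.NewForConfigOrDie(cfg), "addons-718596")
        fmt.Println(ready, err)
    }
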
	I1018 08:31:26.526244 1276853 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:31:26.526300 1276853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:31:26.567525 1276853 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:31:26.567551 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.570647 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.570976 1276853 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:31:26.570994 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:26.577257 1276853 api_server.go:72] duration metric: took 41.836784185s to wait for apiserver process to appear ...
	I1018 08:31:26.577280 1276853 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:31:26.577298 1276853 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 08:31:26.585979 1276853 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 08:31:26.586968 1276853 api_server.go:141] control plane version: v1.34.1
	I1018 08:31:26.586993 1276853 api_server.go:131] duration metric: took 9.706726ms to wait for apiserver health ...
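
The healthz wait is a plain HTTPS GET that treats a 200 with body "ok" as healthy. A hedged reproduction from outside the cluster (skipping TLS verification is an assumption for a quick local probe, not how minikube's own client is configured); /healthz is readable without credentials because the default system:public-info-viewer ClusterRole grants it to unauthenticated users:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: tolerate the apiserver's self-signed certificate.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
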
	I1018 08:31:26.587006 1276853 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:31:26.610190 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:26.611483 1276853 system_pods.go:59] 19 kube-system pods found
	I1018 08:31:26.611519 1276853 system_pods.go:61] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:26.611526 1276853 system_pods.go:61] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending
	I1018 08:31:26.611532 1276853 system_pods.go:61] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending
	I1018 08:31:26.611536 1276853 system_pods.go:61] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending
	I1018 08:31:26.611542 1276853 system_pods.go:61] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:26.611550 1276853 system_pods.go:61] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:26.611582 1276853 system_pods.go:61] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:26.611591 1276853 system_pods.go:61] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:26.611598 1276853 system_pods.go:61] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:26.611606 1276853 system_pods.go:61] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:26.611611 1276853 system_pods.go:61] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:26.611619 1276853 system_pods.go:61] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:26.611623 1276853 system_pods.go:61] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending
	I1018 08:31:26.611628 1276853 system_pods.go:61] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending
	I1018 08:31:26.611634 1276853 system_pods.go:61] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:26.611638 1276853 system_pods.go:61] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending
	I1018 08:31:26.611644 1276853 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending
	I1018 08:31:26.611654 1276853 system_pods.go:61] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending
	I1018 08:31:26.611661 1276853 system_pods.go:61] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:31:26.611666 1276853 system_pods.go:74] duration metric: took 24.655369ms to wait for pod list to return data ...
	I1018 08:31:26.611680 1276853 default_sa.go:34] waiting for default service account to be created ...
	I1018 08:31:26.616295 1276853 default_sa.go:45] found service account: "default"
	I1018 08:31:26.616320 1276853 default_sa.go:55] duration metric: took 4.633615ms for default service account to be created ...
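
The "default" ServiceAccount is created asynchronously by kube-controller-manager after the namespace exists, which is why default_sa.go polls for it before declaring the cluster usable. A minimal sketch of the underlying lookup, assuming client-go (the real wait retries instead of panicking):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        sa, err := client.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("found service account: %q\n", sa.Name)
    }
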
	I1018 08:31:26.616329 1276853 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 08:31:26.619946 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:26.619980 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:26.619987 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending
	I1018 08:31:26.619992 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending
	I1018 08:31:26.619996 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending
	I1018 08:31:26.620001 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:26.620006 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:26.620012 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:26.620020 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:26.620028 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:26.620037 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:26.620042 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:26.620049 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:26.620057 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending
	I1018 08:31:26.620061 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending
	I1018 08:31:26.620068 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:26.620076 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending
	I1018 08:31:26.620081 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending
	I1018 08:31:26.620085 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending
	I1018 08:31:26.620100 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:31:26.620114 1276853 retry.go:31] will retry after 230.981942ms: missing components: kube-dns
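
system_pods.go:116 then waits for a short list of required k8s-apps; the only one missing here is kube-dns, because CoreDNS pods carry the legacy k8s-app=kube-dns label and coredns-66bc5c9577-8nftz above is still Pending. A hedged sketch of that check; the component list is illustrative:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // missingComponents returns the components with no Running pod in kube-system.
    func missingComponents(client kubernetes.Interface, components []string) ([]string, error) {
        var missing []string
        for _, c := range components {
            pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: "k8s-app=" + c})
            if err != nil {
                return nil, err
            }
            running := false
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    running = true
                    break
                }
            }
            if !running {
                missing = append(missing, c)
            }
        }
        return missing, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        m, err := missingComponents(kubernetes.NewForConfigOrDie(cfg), []string{"kube-dns", "kube-proxy"})
        fmt.Println(m, err)
    }
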
	I1018 08:31:26.861934 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:26.861966 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:26.861975 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:26.861984 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:26.861993 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:26.861998 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:26.862003 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:26.862007 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:26.862011 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:26.862017 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:26.862025 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:26.862030 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:26.862044 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:26.862051 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:26.862062 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:26.862068 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:26.862076 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:26.862086 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:26.862094 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:26.862107 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:31:26.862122 1276853 retry.go:31] will retry after 387.779919ms: missing components: kube-dns
	I1018 08:31:26.974160 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:27.074564 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:27.074750 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:27.074829 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:27.254006 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:27.254039 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:27.254047 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:27.254055 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:27.254061 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:27.254065 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:27.254070 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:27.254074 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:27.254078 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:27.254083 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:27.254087 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:27.254091 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:27.254098 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:27.254105 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:27.254110 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:27.254117 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:27.254123 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:27.254129 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:27.254137 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:27.254142 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:31:27.254156 1276853 retry.go:31] will retry after 387.411248ms: missing components: kube-dns
	I1018 08:31:27.472381 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:27.533237 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:27.533832 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:27.535262 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:27.648140 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:27.648178 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:27.648187 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:27.648194 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:27.648202 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:27.648207 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:27.648212 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:27.648217 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:27.648222 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:27.648228 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:27.648232 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:27.648244 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:27.648250 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:27.648257 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:27.648269 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:27.648278 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:27.648288 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:27.648295 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:27.648307 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:27.648311 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Running
	I1018 08:31:27.648327 1276853 retry.go:31] will retry after 383.879169ms: missing components: kube-dns
	I1018 08:31:27.971710 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.072779 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:28.072999 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.075873 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:28.077214 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:28.081909 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:28.081930 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:28.081938 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:28.081946 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:28.081951 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:28.081957 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:28.081967 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:28.081973 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:28.081980 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:28.081984 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:28.081990 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:28.081997 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:28.082006 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:28.082013 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:28.082020 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:28.082026 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:28.082032 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:28.082043 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:28.082047 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Running
	I1018 08:31:28.082064 1276853 retry.go:31] will retry after 484.318948ms: missing components: kube-dns
	I1018 08:31:28.472319 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.533176 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:28.542820 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:28.544593 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.576761 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:28.576839 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:28.576866 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:28.576913 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:28.576943 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:28.576963 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:28.576984 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:28.577003 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:28.577034 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:28.577057 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:28.577076 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:28.577107 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:28.577129 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:28.577148 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:28.577169 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:28.577190 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:28.577225 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:28.577246 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:28.577272 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:28.577300 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Running
	I1018 08:31:28.577333 1276853 retry.go:31] will retry after 892.542789ms: missing components: kube-dns
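
The two polls above block only on kube-dns: every other kube-system component is either Running or merely waiting on addon containers, and retry.go backs off (484ms, then 892ms) until CoreDNS reports Ready. To watch the same condition by hand, a minimal sketch, assuming the stock kubeadm label k8s-app=kube-dns (an assumption; the log only shows the pod name coredns-66bc5c9577-8nftz):

	# List/watch the CoreDNS pods the wait loop is polling for
	# (the label selector is an assumption based on stock kubeadm manifests):
	kubectl -n kube-system get pods -l k8s-app=kube-dns -w
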
	I1018 08:31:28.972227 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.030069 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:29.031180 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:29.032911 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.477776 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.478597 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:29.478629 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Running
	I1018 08:31:29.478672 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:29.478689 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:29.478697 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:29.478705 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:29.478710 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:29.478715 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:29.478725 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:29.478757 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:29.478772 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:29.478784 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:29.478790 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:29.478797 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:29.478803 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:29.478809 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:29.478832 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:29.478848 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:29.478856 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:29.478878 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Running
	I1018 08:31:29.478888 1276853 system_pods.go:126] duration metric: took 2.862551766s to wait for k8s-apps to be running ...
	I1018 08:31:29.478929 1276853 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 08:31:29.479007 1276853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:31:29.495523 1276853 system_svc.go:56] duration metric: took 16.586513ms WaitForService to wait for kubelet
	I1018 08:31:29.495590 1276853 kubeadm.go:586] duration metric: took 44.755119866s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:31:29.495624 1276853 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:31:29.498672 1276853 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 08:31:29.498737 1276853 node_conditions.go:123] node cpu capacity is 2
	I1018 08:31:29.498766 1276853 node_conditions.go:105] duration metric: took 3.120231ms to run NodePressure ...
	I1018 08:31:29.498791 1276853 start.go:241] waiting for startup goroutines ...
	I1018 08:31:29.532625 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:29.532977 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:29.534509 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.972949 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.073069 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:30.073313 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.073300 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:30.476272 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.533795 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:30.534218 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:30.535947 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.972974 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:31.035160 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:31.036077 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:31.036568 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:31.472983 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:31.535995 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:31.536578 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:31.538821 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:31.972326 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.036643 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:32.037166 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:32.041877 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:32.472658 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.535877 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:32.551001 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:32.551577 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:32.972831 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.030856 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:33.032723 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:33.034115 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:33.472590 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.531915 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:33.531916 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:33.534664 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:33.972625 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:34.033944 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:34.034064 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:34.073711 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:34.471802 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:34.532444 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:34.534130 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:34.535152 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:34.972574 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:35.034366 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:35.034793 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:35.035912 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:35.471877 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:35.531229 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:35.531334 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:35.533659 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:35.971899 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:36.033035 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:36.033345 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:36.036276 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:36.472129 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:36.531360 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:36.533871 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:36.534735 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:36.971908 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:37.072928 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:37.073252 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:37.073551 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:37.473317 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:37.530686 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:37.531407 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:37.533068 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:37.971830 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:38.030800 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:38.031448 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:38.033376 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:38.473591 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:38.532962 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:38.533417 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:38.535502 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:38.972725 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:39.033519 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:39.034767 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:39.035253 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:39.472527 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:39.533354 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:39.533732 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:39.538067 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:39.972094 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:40.034670 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:40.035231 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:40.037351 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:40.472298 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:40.532571 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:40.534011 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:40.535397 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:40.972411 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:41.031824 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:41.033070 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:41.033924 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:41.471826 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:41.531227 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:41.531371 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:41.533138 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:41.971649 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:42.032601 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:42.032765 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:42.035644 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:42.473172 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:42.533333 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:42.535365 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:42.535878 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:42.972743 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:43.031292 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:43.031752 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:43.033860 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:43.473938 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:43.531677 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:43.531949 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:43.534566 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:43.719823 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:43.974418 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:44.032734 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:44.034989 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:44.036687 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:44.471548 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:44.533259 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:44.534431 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:44.541004 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:44.928037 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.20815594s)
	W1018 08:31:44.928077 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:44.928117 1276853 retry.go:31] will retry after 22.62489028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
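
The validator's complaint, "apiVersion not set, kind not set", means at least one YAML document inside ig-crd.yaml is missing the two top-level fields every Kubernetes manifest must carry; the deployment file applies cleanly (everything in stdout is "unchanged" or "configured"), and the error names ig-crd.yaml directly. A minimal way to confirm this from the node without touching the cluster, sketched here with the paths taken verbatim from the log:

	# Inspect the head of the failing manifest; a CRD document would normally
	# open with "apiVersion: apiextensions.k8s.io/v1" and
	# "kind: CustomResourceDefinition".
	sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	# Re-run the validation client-side only (no server-side changes):
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml
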
	I1018 08:31:44.972879 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:45.035957 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:45.037909 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:45.038720 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:45.480249 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:45.545289 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:45.545358 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:45.545807 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:45.973082 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:46.031206 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:46.031383 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:46.033563 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:46.472124 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:46.535912 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:46.536073 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:46.537898 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:46.973830 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:47.032655 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:47.033216 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:47.035718 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:47.473390 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:47.576816 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:47.577360 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:47.578068 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:47.972564 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:48.038085 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:48.038299 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:48.039087 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:48.471350 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:48.561316 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:48.561405 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:48.561841 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:48.972286 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:49.033236 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:49.033604 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:49.035310 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:49.472151 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:49.535089 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:49.535649 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:49.536523 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:49.972503 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:50.036065 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:50.036516 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:50.037254 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:50.472237 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:50.533503 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:50.534356 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:50.536051 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:50.972180 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:51.032029 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:51.033629 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:51.035188 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:51.471941 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:51.533795 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:51.535808 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:51.536091 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:51.971645 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:52.034928 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:52.035405 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:52.035484 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:52.477759 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:52.533717 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:52.534538 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:52.535548 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:52.973256 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:53.035033 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:53.035665 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:53.036102 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:53.471209 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:53.531234 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:53.533190 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:53.534196 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:53.971865 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:54.033875 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:54.034628 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:54.037123 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:54.472473 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:54.531199 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:54.532211 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:54.533591 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:54.973007 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:55.033258 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:55.035690 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:55.037490 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:55.472398 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:55.531046 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:55.531967 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:55.533626 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:55.971780 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:56.032077 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:56.032162 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:56.034042 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:56.471861 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:56.553619 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:56.554213 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:56.555659 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:56.973122 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:57.033954 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:57.034035 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:57.034628 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:57.473010 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:57.533030 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:57.534667 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:57.536593 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:57.972263 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:58.032298 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:58.032431 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:58.034396 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:58.471726 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:58.535563 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:58.536406 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:58.537072 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:58.971881 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:59.073055 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:59.073199 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:59.073310 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:59.472346 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:59.530110 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:59.531508 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:59.533307 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:59.971907 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:00.035147 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:00.037448 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:00.037793 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:00.472534 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:00.533180 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:00.533496 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:00.534758 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:00.972445 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:01.072670 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:01.072792 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:01.073526 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:01.471979 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:01.532607 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:01.532771 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:01.534512 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:01.973642 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:02.035808 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:02.036787 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:02.037287 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:02.472018 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:02.531384 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:02.531481 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:02.533225 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:02.971945 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:03.031514 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:03.032417 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:03.044174 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:03.471487 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:03.531053 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:03.531225 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:03.533889 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:03.972436 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:04.031240 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:04.033146 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:04.034970 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:04.472252 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:04.531263 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:04.532206 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:04.533413 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:04.973050 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:05.033644 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:05.034141 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:05.038020 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:05.471739 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:05.535241 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:05.535657 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:05.538127 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:05.972245 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:06.032366 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:06.034390 1276853 kapi.go:107] duration metric: took 1m15.007247767s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 08:32:06.037492 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:06.472575 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:06.532524 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:06.535505 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:06.973309 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:07.032715 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:07.033363 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:07.472336 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:07.531623 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:07.533547 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:07.553744 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:32:07.974988 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:08.032403 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:08.034147 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:08.472028 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:08.531590 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:08.533391 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:08.716882 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.163049236s)
	W1018 08:32:08.716970 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:32:08.717003 1276853 retry.go:31] will retry after 31.278700369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:32:08.971775 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:09.032614 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:09.034438 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:09.471905 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:09.530961 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:09.533278 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:09.971486 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:10.038694 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:10.039404 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:10.472279 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:10.531466 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:10.534039 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:10.972493 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:11.073441 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:11.073900 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:11.472870 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:11.532069 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:11.533985 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:11.971375 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:12.033065 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:12.034101 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:12.472255 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:12.531578 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:12.533991 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:12.972050 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:13.032601 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:13.034015 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:13.472255 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:13.532522 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:13.534049 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:13.972773 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:14.031737 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:14.034243 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:14.472447 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:14.538878 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:14.540423 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:14.972447 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:15.052505 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:15.054314 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:15.472947 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:15.531745 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:15.533853 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:15.972871 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:16.034580 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:16.036176 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:16.472109 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:16.531408 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:16.533659 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:16.972451 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:17.034376 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:17.034661 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:17.472599 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:17.531655 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:17.534410 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:17.972547 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:18.033340 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:18.035463 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:18.472358 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:18.531673 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:18.533514 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:18.975987 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:19.075190 1276853 kapi.go:107] duration metric: took 1m24.544472091s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 08:32:19.075657 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:19.078257 1276853 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-718596 cluster.
	I1018 08:32:19.081236 1276853 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 08:32:19.084301 1276853 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 08:32:19.472733 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:19.531518 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:19.971667 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:20.031779 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:20.478065 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:20.531053 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:20.972069 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:21.036843 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:21.472290 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:21.531428 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:21.971774 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:22.033396 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:22.472027 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:22.535965 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:22.973667 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:23.036451 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:23.476391 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:23.531442 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:23.972443 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:24.032020 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:24.475576 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:24.531259 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:24.986547 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:25.073352 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:25.472250 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:25.531362 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:25.972217 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:26.031777 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:26.471956 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:26.531190 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:26.972463 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:27.031591 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:27.472332 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:27.531202 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:27.974297 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:28.032544 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:28.471413 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:28.531352 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:28.975702 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:29.075944 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:29.476258 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:29.534132 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:29.972634 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:30.034842 1276853 kapi.go:107] duration metric: took 1m39.006839949s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 08:32:30.472286 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:30.972556 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:31.472527 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:31.972669 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:32.504910 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:32.971776 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:33.472216 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:33.971685 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:34.471822 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:34.972518 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:35.472691 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:35.973458 1276853 kapi.go:107] duration metric: took 1m44.505211733s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 08:32:39.997765 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 08:32:40.863164 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 08:32:40.863262 1276853 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 08:32:40.866355 1276853 out.go:179] * Enabled addons: default-storageclass, cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, storage-provisioner, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1018 08:32:40.869408 1276853 addons.go:514] duration metric: took 1m56.128538199s for enable addons: enabled=[default-storageclass cloud-spanner nvidia-device-plugin amd-gpu-device-plugin registry-creds storage-provisioner ingress-dns storage-provisioner-rancher metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1018 08:32:40.869462 1276853 start.go:246] waiting for cluster config update ...
	I1018 08:32:40.869488 1276853 start.go:255] writing updated cluster config ...
	I1018 08:32:40.869786 1276853 ssh_runner.go:195] Run: rm -f paused
	I1018 08:32:40.873432 1276853 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:32:40.877961 1276853 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8nftz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.886213 1276853 pod_ready.go:94] pod "coredns-66bc5c9577-8nftz" is "Ready"
	I1018 08:32:40.886244 1276853 pod_ready.go:86] duration metric: took 8.247591ms for pod "coredns-66bc5c9577-8nftz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.888749 1276853 pod_ready.go:83] waiting for pod "etcd-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.893644 1276853 pod_ready.go:94] pod "etcd-addons-718596" is "Ready"
	I1018 08:32:40.893676 1276853 pod_ready.go:86] duration metric: took 4.899517ms for pod "etcd-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.897098 1276853 pod_ready.go:83] waiting for pod "kube-apiserver-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.901857 1276853 pod_ready.go:94] pod "kube-apiserver-addons-718596" is "Ready"
	I1018 08:32:40.901882 1276853 pod_ready.go:86] duration metric: took 4.75662ms for pod "kube-apiserver-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.904273 1276853 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:41.277891 1276853 pod_ready.go:94] pod "kube-controller-manager-addons-718596" is "Ready"
	I1018 08:32:41.277921 1276853 pod_ready.go:86] duration metric: took 373.580209ms for pod "kube-controller-manager-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:41.477375 1276853 pod_ready.go:83] waiting for pod "kube-proxy-ssljd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:41.877619 1276853 pod_ready.go:94] pod "kube-proxy-ssljd" is "Ready"
	I1018 08:32:41.877652 1276853 pod_ready.go:86] duration metric: took 400.240693ms for pod "kube-proxy-ssljd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:42.079529 1276853 pod_ready.go:83] waiting for pod "kube-scheduler-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:42.477252 1276853 pod_ready.go:94] pod "kube-scheduler-addons-718596" is "Ready"
	I1018 08:32:42.477295 1276853 pod_ready.go:86] duration metric: took 397.734515ms for pod "kube-scheduler-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:42.477307 1276853 pod_ready.go:40] duration metric: took 1.603843279s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:32:42.530327 1276853 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 08:32:42.533364 1276853 out.go:179] * Done! kubectl is now configured to use "addons-718596" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 08:35:39 addons-718596 crio[827]: time="2025-10-18T08:35:39.150369934Z" level=info msg="Removed container ad936f4411a53322e51263d1a12a7310134cb8fc9b957ec2acfc9b647ee76385: kube-system/registry-creds-764b6fb674-hhnk4/registry-creds" id=2207a9c1-4aa4-495d-b7fc-b85f99ec8660 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.854414918Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-sk2xs/POD" id=be13dacb-119c-449c-bdfc-795ef2621495 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.85451199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.86664506Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-sk2xs Namespace:default ID:cdfef88ff98e4ade992273d63923de3dd6755e35c4469d2269a81ff380076cb8 UID:fc072e62-a9fb-4083-8e15-4e0950443f9f NetNS:/var/run/netns/4453dfff-8c49-4fd8-9fe9-37a1e18319ef Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000131078}] Aliases:map[]}"
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.866846565Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-sk2xs to CNI network \"kindnet\" (type=ptp)"
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.881825074Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-sk2xs Namespace:default ID:cdfef88ff98e4ade992273d63923de3dd6755e35c4469d2269a81ff380076cb8 UID:fc072e62-a9fb-4083-8e15-4e0950443f9f NetNS:/var/run/netns/4453dfff-8c49-4fd8-9fe9-37a1e18319ef Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000131078}] Aliases:map[]}"
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.882123265Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-sk2xs for CNI network kindnet (type=ptp)"
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.892224085Z" level=info msg="Ran pod sandbox cdfef88ff98e4ade992273d63923de3dd6755e35c4469d2269a81ff380076cb8 with infra container: default/hello-world-app-5d498dc89-sk2xs/POD" id=be13dacb-119c-449c-bdfc-795ef2621495 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.893785693Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=eb9d4362-43ee-4de6-b1ff-6a8c575ad24e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.894022093Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=eb9d4362-43ee-4de6-b1ff-6a8c575ad24e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.894140449Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=eb9d4362-43ee-4de6-b1ff-6a8c575ad24e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.895753477Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=d7b748b4-5d58-4c73-9749-6083365012be name=/runtime.v1.ImageService/PullImage
	Oct 18 08:35:41 addons-718596 crio[827]: time="2025-10-18T08:35:41.898768626Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.559364035Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=d7b748b4-5d58-4c73-9749-6083365012be name=/runtime.v1.ImageService/PullImage
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.559960729Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7cfd504f-1503-4d65-9b90-292a9904165a name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.568798657Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b466d962-7fc1-494d-8d85-d73ec52894a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.577904796Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-sk2xs/hello-world-app" id=d737f458-6430-4795-b288-fabf02ff28a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.578975201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.588321136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.588531363Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/360afd16b2f8e9ca07f8417b42946c6bc036cc61e5c1c59d5f578f6734a0572c/merged/etc/passwd: no such file or directory"
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.588555518Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/360afd16b2f8e9ca07f8417b42946c6bc036cc61e5c1c59d5f578f6734a0572c/merged/etc/group: no such file or directory"
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.588854677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.613884834Z" level=info msg="Created container f202586479b8725a50f7df0c676002dcdd77a4defeedd4a55133a83b1282a41c: default/hello-world-app-5d498dc89-sk2xs/hello-world-app" id=d737f458-6430-4795-b288-fabf02ff28a5 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.614606768Z" level=info msg="Starting container: f202586479b8725a50f7df0c676002dcdd77a4defeedd4a55133a83b1282a41c" id=f27603b9-da42-44a8-a0dd-84343425658b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 08:35:42 addons-718596 crio[827]: time="2025-10-18T08:35:42.616726523Z" level=info msg="Started container" PID=7235 containerID=f202586479b8725a50f7df0c676002dcdd77a4defeedd4a55133a83b1282a41c description=default/hello-world-app-5d498dc89-sk2xs/hello-world-app id=f27603b9-da42-44a8-a0dd-84343425658b name=/runtime.v1.RuntimeService/StartContainer sandboxID=cdfef88ff98e4ade992273d63923de3dd6755e35c4469d2269a81ff380076cb8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	f202586479b87       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   cdfef88ff98e4       hello-world-app-5d498dc89-sk2xs             default
	ff117932bc608       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             4 seconds ago            Exited              registry-creds                           1                   7c9e2b22e9fa7       registry-creds-764b6fb674-hhnk4             kube-system
	8946c14c0474a       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   96527cbbc0805       nginx                                       default
	aeb26a57fbbe3       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   6bb6f44141a81       busybox                                     default
	f2ba69481cca4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	fee77718765ce       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	2b38f5de44735       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	60f3656a31e33       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	69416bfe918f8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	974724db9b42c       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   5f4e85036be20       ingress-nginx-controller-675c5ddd98-jnjlc   ingress-nginx
	155e540f1af62       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   b76d452a09331       gadget-bht4v                                gadget
	94be5aa873ec7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   fb0d3d4091691       gcp-auth-78565c9fb4-ftmb2                   gcp-auth
	dd33128f289f9       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   302fabf263ae0       local-path-provisioner-648f6765c9-jb247     local-path-storage
	28a70a60eb5fb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   b59f4d16c35d6       kube-ingress-dns-minikube                   kube-system
	9b6001d8d045b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   3da4b786d6a80       registry-proxy-pvgzm                        kube-system
	20dde0b8d894a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	1a2a1784a32ed       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   b7eacbf9d1753       csi-hostpath-resizer-0                      kube-system
	41255395d4ccf       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   795b96639631e       nvidia-device-plugin-daemonset-clntn        kube-system
	823da2535019f       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago            Exited              patch                                    2                   39d26ec8d909d       ingress-nginx-admission-patch-mt9m7         ingress-nginx
	df452f4c1f840       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   7d36a5a11c987       registry-6b586f9694-6wmvl                   kube-system
	b7afa4a4426cb       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   402d8f3a03b7c       snapshot-controller-7d9fbc56b8-4c88f        kube-system
	f56bc5e257a76       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   552e06887980e       ingress-nginx-admission-create-vfgl2        ingress-nginx
	73bba38c93d1d       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   06741663bf9b7       csi-hostpath-attacher-0                     kube-system
	8ed328ea6d0f5       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   cee1fb1be16af       yakd-dashboard-5ff678cb9-8568g              yakd-dashboard
	3341cb4941ef2       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   f76f7becd7b64       snapshot-controller-7d9fbc56b8-m2jxk        kube-system
	919fd269f9a19       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   f34b145e4a2f1       cloud-spanner-emulator-86bd5cbb97-8gkdk     default
	af3c01fc17b1c       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   c2daf4aff6ee1       metrics-server-85b7d694d7-qkx7f             kube-system
	21a4115e68c1d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   7c7eca5deb916       coredns-66bc5c9577-8nftz                    kube-system
	8a8b73b00b16e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   7182518dde626       storage-provisioner                         kube-system
	c13caa4e33e4b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   8ca5ffc543117       kindnet-nmmrr                               kube-system
	c8f4c76b52ea3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   6a043050176b9       kube-proxy-ssljd                            kube-system
	d3d6b4b5a780c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   01c9e6e6c9ac7       kube-apiserver-addons-718596                kube-system
	b5b1a3ea57732       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   192c7a0ffac0d       kube-scheduler-addons-718596                kube-system
	3d8f28771c74b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   8d0728e96919c       etcd-addons-718596                          kube-system
	c60014395decc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   843a6ca8c651e       kube-controller-manager-addons-718596       kube-system
	
	
	==> coredns [21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50] <==
	[INFO] 10.244.0.16:52896 - 46349 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002055835s
	[INFO] 10.244.0.16:52896 - 55768 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000098122s
	[INFO] 10.244.0.16:52896 - 4261 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000108403s
	[INFO] 10.244.0.16:37267 - 59236 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000146088s
	[INFO] 10.244.0.16:37267 - 59047 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000206838s
	[INFO] 10.244.0.16:42280 - 9151 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012515s
	[INFO] 10.244.0.16:42280 - 8723 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000159224s
	[INFO] 10.244.0.16:40332 - 27910 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00014183s
	[INFO] 10.244.0.16:40332 - 27688 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000233085s
	[INFO] 10.244.0.16:52570 - 56577 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001157415s
	[INFO] 10.244.0.16:52570 - 56380 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001163626s
	[INFO] 10.244.0.16:51849 - 27299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121359s
	[INFO] 10.244.0.16:51849 - 27487 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132362s
	[INFO] 10.244.0.19:37279 - 15211 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000197952s
	[INFO] 10.244.0.19:54658 - 50539 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000120932s
	[INFO] 10.244.0.19:40844 - 46512 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001427s
	[INFO] 10.244.0.19:33349 - 38586 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107714s
	[INFO] 10.244.0.19:46138 - 46502 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000214436s
	[INFO] 10.244.0.19:54137 - 18290 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103218s
	[INFO] 10.244.0.19:37917 - 6226 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001876359s
	[INFO] 10.244.0.19:40127 - 58434 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002284306s
	[INFO] 10.244.0.19:58183 - 4327 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002294571s
	[INFO] 10.244.0.19:48371 - 42778 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001728596s
	[INFO] 10.244.0.23:54472 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000196492s
	[INFO] 10.244.0.23:36743 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093232s
	
	
	==> describe nodes <==
	Name:               addons-718596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-718596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=addons-718596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_30_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-718596
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-718596"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:30:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-718596
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 08:35:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 08:34:23 +0000   Sat, 18 Oct 2025 08:30:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 08:34:23 +0000   Sat, 18 Oct 2025 08:30:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 08:34:23 +0000   Sat, 18 Oct 2025 08:30:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 08:34:23 +0000   Sat, 18 Oct 2025 08:31:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-718596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                dc9321bd-7d08-4a3c-9dd2-b8eede71a99c
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  default                     cloud-spanner-emulator-86bd5cbb97-8gkdk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  default                     hello-world-app-5d498dc89-sk2xs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-bht4v                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  gcp-auth                    gcp-auth-78565c9fb4-ftmb2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jnjlc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m53s
	  kube-system                 coredns-66bc5c9577-8nftz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m59s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 csi-hostpathplugin-j45m4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 etcd-addons-718596                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m4s
	  kube-system                 kindnet-nmmrr                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m
	  kube-system                 kube-apiserver-addons-718596                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-718596        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-ssljd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-addons-718596                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 metrics-server-85b7d694d7-qkx7f              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m54s
	  kube-system                 nvidia-device-plugin-daemonset-clntn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 registry-6b586f9694-6wmvl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 registry-creds-764b6fb674-hhnk4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 registry-proxy-pvgzm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 snapshot-controller-7d9fbc56b8-4c88f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 snapshot-controller-7d9fbc56b8-m2jxk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  local-path-storage          local-path-provisioner-648f6765c9-jb247      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8568g               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m57s  kube-proxy       
	  Normal   Starting                 5m5s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m5s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m4s   kubelet          Node addons-718596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m4s   kubelet          Node addons-718596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m4s   kubelet          Node addons-718596 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m     node-controller  Node addons-718596 event: Registered Node addons-718596 in Controller
	  Normal   NodeReady                4m17s  kubelet          Node addons-718596 status is now: NodeReady
	
	
	==> dmesg <==
	[ +30.749123] overlayfs: idmapped layers are currently not supported
	[Oct18 08:05] overlayfs: idmapped layers are currently not supported
	[Oct18 08:06] overlayfs: idmapped layers are currently not supported
	[Oct18 08:08] overlayfs: idmapped layers are currently not supported
	[Oct18 08:09] overlayfs: idmapped layers are currently not supported
	[Oct18 08:10] overlayfs: idmapped layers are currently not supported
	[ +38.212735] overlayfs: idmapped layers are currently not supported
	[Oct18 08:11] overlayfs: idmapped layers are currently not supported
	[Oct18 08:12] overlayfs: idmapped layers are currently not supported
	[Oct18 08:13] overlayfs: idmapped layers are currently not supported
	[  +7.848314] overlayfs: idmapped layers are currently not supported
	[Oct18 08:14] overlayfs: idmapped layers are currently not supported
	[Oct18 08:15] overlayfs: idmapped layers are currently not supported
	[Oct18 08:16] overlayfs: idmapped layers are currently not supported
	[ +29.066776] overlayfs: idmapped layers are currently not supported
	[Oct18 08:17] overlayfs: idmapped layers are currently not supported
	[Oct18 08:18] overlayfs: idmapped layers are currently not supported
	[  +0.898927] overlayfs: idmapped layers are currently not supported
	[Oct18 08:20] overlayfs: idmapped layers are currently not supported
	[  +5.259921] overlayfs: idmapped layers are currently not supported
	[Oct18 08:22] overlayfs: idmapped layers are currently not supported
	[  +6.764143] overlayfs: idmapped layers are currently not supported
	[Oct18 08:24] overlayfs: idmapped layers are currently not supported
	[Oct18 08:29] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 08:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8] <==
	{"level":"warn","ts":"2025-10-18T08:30:35.090997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.118270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.121638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.143914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.161159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.172987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.192303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.204186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.221869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.237345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.256215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.271320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.292732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.321656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.330454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.378266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.384827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.402819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.494482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:51.604219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:51.623632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:31:13.171059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:31:13.186742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:31:13.230524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:31:13.253755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40464","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [94be5aa873ec727ccddb1ea1b2875bc26b001adeb30bb66792f8fa88896103df] <==
	2025/10/18 08:32:18 GCP Auth Webhook started!
	2025/10/18 08:32:42 Ready to marshal response ...
	2025/10/18 08:32:42 Ready to write response ...
	2025/10/18 08:32:43 Ready to marshal response ...
	2025/10/18 08:32:43 Ready to write response ...
	2025/10/18 08:32:43 Ready to marshal response ...
	2025/10/18 08:32:43 Ready to write response ...
	2025/10/18 08:33:03 Ready to marshal response ...
	2025/10/18 08:33:03 Ready to write response ...
	2025/10/18 08:33:16 Ready to marshal response ...
	2025/10/18 08:33:16 Ready to write response ...
	2025/10/18 08:33:19 Ready to marshal response ...
	2025/10/18 08:33:19 Ready to write response ...
	2025/10/18 08:33:44 Ready to marshal response ...
	2025/10/18 08:33:44 Ready to write response ...
	2025/10/18 08:34:07 Ready to marshal response ...
	2025/10/18 08:34:07 Ready to write response ...
	2025/10/18 08:34:07 Ready to marshal response ...
	2025/10/18 08:34:07 Ready to write response ...
	2025/10/18 08:34:15 Ready to marshal response ...
	2025/10/18 08:34:15 Ready to write response ...
	2025/10/18 08:35:41 Ready to marshal response ...
	2025/10/18 08:35:41 Ready to write response ...
	
	
	==> kernel <==
	 08:35:43 up 10:18,  0 user,  load average: 0.51, 1.32, 2.11
	Linux addons-718596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787] <==
	I1018 08:33:35.616502       1 main.go:301] handling current node
	I1018 08:33:45.616123       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:33:45.616241       1 main.go:301] handling current node
	I1018 08:33:55.616722       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:33:55.616754       1 main.go:301] handling current node
	I1018 08:34:05.620607       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:34:05.620641       1 main.go:301] handling current node
	I1018 08:34:15.619552       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:34:15.619629       1 main.go:301] handling current node
	I1018 08:34:25.616553       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:34:25.616588       1 main.go:301] handling current node
	I1018 08:34:35.616573       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:34:35.616609       1 main.go:301] handling current node
	I1018 08:34:45.616068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:34:45.616226       1 main.go:301] handling current node
	I1018 08:34:55.616588       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:34:55.616620       1 main.go:301] handling current node
	I1018 08:35:05.620132       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:35:05.620167       1 main.go:301] handling current node
	I1018 08:35:15.616768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:35:15.616801       1 main.go:301] handling current node
	I1018 08:35:25.619338       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:35:25.619369       1 main.go:301] handling current node
	I1018 08:35:35.620002       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:35:35.620038       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca] <==
	W1018 08:31:13.246097       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 08:31:26.150028       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.55.23:443: connect: connection refused
	E1018 08:31:26.150152       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.55.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:31:26.170229       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.55.23:443: connect: connection refused
	E1018 08:31:26.170407       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.55.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:31:26.218675       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.55.23:443: connect: connection refused
	E1018 08:31:26.218810       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.55.23:443: connect: connection refused" logger="UnhandledError"
	E1018 08:31:32.593365       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.184.249:443: connect: connection refused" logger="UnhandledError"
	W1018 08:31:32.593973       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 08:31:32.594027       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 08:31:32.595647       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.184.249:443: connect: connection refused" logger="UnhandledError"
	E1018 08:31:32.600383       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.184.249:443: connect: connection refused" logger="UnhandledError"
	E1018 08:31:32.621490       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.184.249:443: connect: connection refused" logger="UnhandledError"
	I1018 08:31:32.798500       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 08:32:51.526208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56388: use of closed network connection
	E1018 08:32:51.758072       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56404: use of closed network connection
	E1018 08:32:51.893100       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56424: use of closed network connection
	I1018 08:33:19.470263       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 08:33:19.866381       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.69.90"}
	I1018 08:33:28.896801       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1018 08:33:31.095098       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1018 08:35:41.725049       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.156.242"}
	
	
	==> kube-controller-manager [c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2] <==
	I1018 08:30:43.165330       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 08:30:43.165405       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-718596"
	I1018 08:30:43.165448       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 08:30:43.173317       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 08:30:43.181022       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 08:30:43.183639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:30:43.189299       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 08:30:43.199947       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 08:30:43.202136       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 08:30:43.202437       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 08:30:43.202598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 08:30:43.203626       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 08:30:43.203782       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 08:30:43.203798       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 08:30:43.203807       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 08:30:43.207913       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1018 08:30:49.707356       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 08:31:13.164181       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 08:31:13.164343       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 08:31:13.164402       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 08:31:13.215003       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 08:31:13.221478       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 08:31:13.265371       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:31:13.322468       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:31:28.173028       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5] <==
	I1018 08:30:45.799966       1 server_linux.go:53] "Using iptables proxy"
	I1018 08:30:45.894131       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:30:45.994999       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:30:45.995047       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:30:45.995144       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:30:46.043364       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:30:46.046826       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:30:46.052100       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:30:46.052395       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:30:46.052409       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:30:46.057815       1 config.go:200] "Starting service config controller"
	I1018 08:30:46.057833       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:30:46.065844       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:30:46.065865       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:30:46.065884       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:30:46.065888       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:30:46.073515       1 config.go:309] "Starting node config controller"
	I1018 08:30:46.076566       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:30:46.076586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:30:46.166657       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:30:46.166697       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 08:30:46.166728       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8] <==
	E1018 08:30:36.326150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:30:36.326272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:30:36.326358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 08:30:36.326521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:30:36.331295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:36.331350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:30:36.331404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:30:36.331475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 08:30:36.331525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:30:36.331574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:30:36.331621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:30:36.331702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:30:36.331752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:30:36.331820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:30:36.331902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:30:36.332005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:30:36.332069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:30:37.111506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 08:30:37.225068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 08:30:37.230414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:30:37.322951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:30:37.362449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:30:37.383983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:37.453836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1018 08:30:39.596337       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 08:34:17 addons-718596 kubelet[1288]: I1018 08:34:17.473016    1288 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/0629a121-fc26-4fea-b489-305a64e1ef8d-data\") on node \"addons-718596\" DevicePath \"\""
	Oct 18 08:34:18 addons-718596 kubelet[1288]: I1018 08:34:18.258808    1288 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbcd0a551f6dd3ae5dfded3c4b23b05329d6b0b3b70c7d372858f47c19d64147"
	Oct 18 08:34:18 addons-718596 kubelet[1288]: I1018 08:34:18.964822    1288 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0629a121-fc26-4fea-b489-305a64e1ef8d" path="/var/lib/kubelet/pods/0629a121-fc26-4fea-b489-305a64e1ef8d/volumes"
	Oct 18 08:34:35 addons-718596 kubelet[1288]: I1018 08:34:35.962461    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-clntn" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:34:39 addons-718596 kubelet[1288]: I1018 08:34:39.067020    1288 scope.go:117] "RemoveContainer" containerID="4a305505fcc8e054b847aaebf5cb451ffcb861ba22e93c34a450326ca42792c2"
	Oct 18 08:34:39 addons-718596 kubelet[1288]: I1018 08:34:39.080769    1288 scope.go:117] "RemoveContainer" containerID="cceeccf270ddc335e76248f0ed09fd24bd7e667852681fce1f8c804c0e040de5"
	Oct 18 08:34:39 addons-718596 kubelet[1288]: E1018 08:34:39.121832    1288 manager.go:1116] Failed to create existing container: /crio-3df46d12358ed657790e52f47835d788ebdeb9abf0bb8eaf3257ae9c93d19f89: Error finding container 3df46d12358ed657790e52f47835d788ebdeb9abf0bb8eaf3257ae9c93d19f89: Status 404 returned error can't find the container with id 3df46d12358ed657790e52f47835d788ebdeb9abf0bb8eaf3257ae9c93d19f89
	Oct 18 08:34:39 addons-718596 kubelet[1288]: I1018 08:34:39.962777    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pvgzm" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:35:23 addons-718596 kubelet[1288]: I1018 08:35:23.962109    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-6wmvl" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:35:36 addons-718596 kubelet[1288]: I1018 08:35:36.462664    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hhnk4" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:35:36 addons-718596 kubelet[1288]: W1018 08:35:36.489610    1288 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/crio-7c9e2b22e9fa737f4b257d628e63c88ae933ecf087d343e371d54f210d1e079d WatchSource:0}: Error finding container 7c9e2b22e9fa737f4b257d628e63c88ae933ecf087d343e371d54f210d1e079d: Status 404 returned error can't find the container with id 7c9e2b22e9fa737f4b257d628e63c88ae933ecf087d343e371d54f210d1e079d
	Oct 18 08:35:38 addons-718596 kubelet[1288]: I1018 08:35:38.548992    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hhnk4" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:35:38 addons-718596 kubelet[1288]: I1018 08:35:38.549052    1288 scope.go:117] "RemoveContainer" containerID="ad936f4411a53322e51263d1a12a7310134cb8fc9b957ec2acfc9b647ee76385"
	Oct 18 08:35:39 addons-718596 kubelet[1288]: E1018 08:35:39.120993    1288 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2f263d2ba0dc3a5a7b42656f3a1a1181eb0db5b2dade7477e96d9fa66c78177b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2f263d2ba0dc3a5a7b42656f3a1a1181eb0db5b2dade7477e96d9fa66c78177b/diff: no such file or directory, extraDiskErr: <nil>
	Oct 18 08:35:39 addons-718596 kubelet[1288]: I1018 08:35:39.132952    1288 scope.go:117] "RemoveContainer" containerID="ad936f4411a53322e51263d1a12a7310134cb8fc9b957ec2acfc9b647ee76385"
	Oct 18 08:35:39 addons-718596 kubelet[1288]: I1018 08:35:39.555151    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hhnk4" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:35:39 addons-718596 kubelet[1288]: I1018 08:35:39.555669    1288 scope.go:117] "RemoveContainer" containerID="ff117932bc6086797dadb254b5ae91789d8f7d047563eec2f01f1d4d99b49b4e"
	Oct 18 08:35:39 addons-718596 kubelet[1288]: E1018 08:35:39.556280    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-hhnk4_kube-system(5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6)\"" pod="kube-system/registry-creds-764b6fb674-hhnk4" podUID="5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6"
	Oct 18 08:35:40 addons-718596 kubelet[1288]: I1018 08:35:40.558600    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-hhnk4" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:35:40 addons-718596 kubelet[1288]: I1018 08:35:40.559201    1288 scope.go:117] "RemoveContainer" containerID="ff117932bc6086797dadb254b5ae91789d8f7d047563eec2f01f1d4d99b49b4e"
	Oct 18 08:35:40 addons-718596 kubelet[1288]: E1018 08:35:40.559513    1288 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-hhnk4_kube-system(5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6)\"" pod="kube-system/registry-creds-764b6fb674-hhnk4" podUID="5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6"
	Oct 18 08:35:41 addons-718596 kubelet[1288]: I1018 08:35:41.632715    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2pp\" (UniqueName: \"kubernetes.io/projected/fc072e62-a9fb-4083-8e15-4e0950443f9f-kube-api-access-dv2pp\") pod \"hello-world-app-5d498dc89-sk2xs\" (UID: \"fc072e62-a9fb-4083-8e15-4e0950443f9f\") " pod="default/hello-world-app-5d498dc89-sk2xs"
	Oct 18 08:35:41 addons-718596 kubelet[1288]: I1018 08:35:41.632798    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/fc072e62-a9fb-4083-8e15-4e0950443f9f-gcp-creds\") pod \"hello-world-app-5d498dc89-sk2xs\" (UID: \"fc072e62-a9fb-4083-8e15-4e0950443f9f\") " pod="default/hello-world-app-5d498dc89-sk2xs"
	Oct 18 08:35:41 addons-718596 kubelet[1288]: W1018 08:35:41.888890    1288 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/crio-cdfef88ff98e4ade992273d63923de3dd6755e35c4469d2269a81ff380076cb8 WatchSource:0}: Error finding container cdfef88ff98e4ade992273d63923de3dd6755e35c4469d2269a81ff380076cb8: Status 404 returned error can't find the container with id cdfef88ff98e4ade992273d63923de3dd6755e35c4469d2269a81ff380076cb8
	Oct 18 08:35:43 addons-718596 kubelet[1288]: I1018 08:35:43.596104    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-sk2xs" podStartSLOduration=1.9297936770000002 podStartE2EDuration="2.596083853s" podCreationTimestamp="2025-10-18 08:35:41 +0000 UTC" firstStartedPulling="2025-10-18 08:35:41.894517776 +0000 UTC m=+303.056115896" lastFinishedPulling="2025-10-18 08:35:42.560807943 +0000 UTC m=+303.722406072" observedRunningTime="2025-10-18 08:35:43.595483147 +0000 UTC m=+304.757081267" watchObservedRunningTime="2025-10-18 08:35:43.596083853 +0000 UTC m=+304.757681973"
	
	
	==> storage-provisioner [8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7] <==
	W1018 08:35:17.976940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:19.979968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:19.987239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:21.990855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:21.995637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:23.998430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:24.006360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:26.012200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:26.017250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:28.021171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:28.025714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:30.033940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:30.048994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:32.052204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:32.057104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:34.060585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:34.067651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:36.071177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:36.075834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:38.079461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:38.084604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:40.088178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:40.095304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:42.108001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:35:42.121463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
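The storage-provisioner log above is dominated by client-go deprecation warnings: the provisioner still watches v1 Endpoints, which is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A minimal sketch for confirming both APIs on this cluster, assuming kubectl access to the addons-718596 context (these commands are not part of the captured run):

	kubectl --context addons-718596 get endpoints -A        # legacy v1 API that triggers the warnings
	kubectl --context addons-718596 get endpointslices -A   # discovery.k8s.io/v1 replacement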
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-718596 -n addons-718596
helpers_test.go:269: (dbg) Run:  kubectl --context addons-718596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-vfgl2 ingress-nginx-admission-patch-mt9m7
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-718596 describe pod ingress-nginx-admission-create-vfgl2 ingress-nginx-admission-patch-mt9m7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-718596 describe pod ingress-nginx-admission-create-vfgl2 ingress-nginx-admission-patch-mt9m7: exit status 1 (121.814806ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vfgl2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mt9m7" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-718596 describe pod ingress-nginx-admission-create-vfgl2 ingress-nginx-admission-patch-mt9m7: exit status 1
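The describe above returns NotFound because the earlier pod listing used -A while describe was run without -n, so it searched only the default namespace; the admission pods are completed ingress-nginx Jobs and may also have been cleaned up by this point. A sketch of the namespaced lookup, assuming the pods still exist:

	kubectl --context addons-718596 -n ingress-nginx describe pod \
	  ingress-nginx-admission-create-vfgl2 ingress-nginx-admission-patch-mt9m7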
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (440.262051ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:35:45.141566 1286621 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:35:45.145756 1286621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:35:45.145787 1286621 out.go:374] Setting ErrFile to fd 2...
	I1018 08:35:45.145796 1286621 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:35:45.146137 1286621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:35:45.146528 1286621 mustload.go:65] Loading cluster: addons-718596
	I1018 08:35:45.147015 1286621 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:35:45.147032 1286621 addons.go:606] checking whether the cluster is paused
	I1018 08:35:45.147160 1286621 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:35:45.147178 1286621 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:35:45.147751 1286621 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:35:45.173473 1286621 ssh_runner.go:195] Run: systemctl --version
	I1018 08:35:45.173547 1286621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:35:45.202262 1286621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:35:45.359278 1286621 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:35:45.359434 1286621 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:35:45.431757 1286621 cri.go:89] found id: "ff117932bc6086797dadb254b5ae91789d8f7d047563eec2f01f1d4d99b49b4e"
	I1018 08:35:45.431783 1286621 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:35:45.431788 1286621 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:35:45.431793 1286621 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:35:45.431797 1286621 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:35:45.431801 1286621 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:35:45.431805 1286621 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:35:45.431808 1286621 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:35:45.431812 1286621 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:35:45.431818 1286621 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:35:45.431822 1286621 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:35:45.431826 1286621 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:35:45.431829 1286621 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:35:45.431832 1286621 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:35:45.431835 1286621 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:35:45.431883 1286621 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:35:45.431891 1286621 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:35:45.431897 1286621 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:35:45.431900 1286621 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:35:45.431904 1286621 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:35:45.431909 1286621 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:35:45.431912 1286621 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:35:45.431915 1286621 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:35:45.431918 1286621 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:35:45.431921 1286621 cri.go:89] found id: ""
	I1018 08:35:45.431984 1286621 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:35:45.451355 1286621 out.go:203] 
	W1018 08:35:45.454130 1286621 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:35:45.454155 1286621 out.go:285] * 
	* 
	W1018 08:35:45.463233 1286621 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:35:45.466252 1286621 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
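Each failing disable here trips the same paused-state probe: minikube lists kube-system containers with crictl (which succeeds above), then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist on this crio node. A reproduction sketch over SSH, reusing the exact commands from the log plus an assumed probe of the usual OCI runtime state directories:

	minikube -p addons-718596 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-718596 ssh -- sudo runc list -f json   # fails: open /run/runc: no such file or directory
	minikube -p addons-718596 ssh -- ls /run/runc /run/crun   # check which runtime state dir is actually present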
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable ingress --alsologtostderr -v=1: exit status 11 (272.547223ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:35:45.525950 1286664 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:35:45.527410 1286664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:35:45.527431 1286664 out.go:374] Setting ErrFile to fd 2...
	I1018 08:35:45.527439 1286664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:35:45.527786 1286664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:35:45.528166 1286664 mustload.go:65] Loading cluster: addons-718596
	I1018 08:35:45.528633 1286664 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:35:45.528656 1286664 addons.go:606] checking whether the cluster is paused
	I1018 08:35:45.528808 1286664 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:35:45.528840 1286664 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:35:45.529342 1286664 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:35:45.549070 1286664 ssh_runner.go:195] Run: systemctl --version
	I1018 08:35:45.549177 1286664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:35:45.568994 1286664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:35:45.674155 1286664 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:35:45.674229 1286664 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:35:45.708739 1286664 cri.go:89] found id: "ff117932bc6086797dadb254b5ae91789d8f7d047563eec2f01f1d4d99b49b4e"
	I1018 08:35:45.708759 1286664 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:35:45.708764 1286664 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:35:45.708768 1286664 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:35:45.708772 1286664 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:35:45.708775 1286664 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:35:45.708803 1286664 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:35:45.708808 1286664 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:35:45.708812 1286664 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:35:45.708818 1286664 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:35:45.708822 1286664 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:35:45.708825 1286664 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:35:45.708828 1286664 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:35:45.708831 1286664 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:35:45.708835 1286664 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:35:45.708840 1286664 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:35:45.708846 1286664 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:35:45.708851 1286664 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:35:45.708855 1286664 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:35:45.708858 1286664 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:35:45.708877 1286664 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:35:45.708883 1286664 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:35:45.708886 1286664 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:35:45.708889 1286664 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:35:45.708892 1286664 cri.go:89] found id: ""
	I1018 08:35:45.708954 1286664 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:35:45.723560 1286664 out.go:203] 
	W1018 08:35:45.726487 1286664 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:35:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:35:45.726509 1286664 out.go:285] * 
	* 
	W1018 08:35:45.735334 1286664 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:35:45.738195 1286664 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.64s)
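Every addon enable/disable failure in this report shares the root cause visible in the stderr above: before touching an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node (addons.go:606 via ssh_runner.go:195), and on a CRI-O node the runc state directory /run/runc does not exist, so the probe exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED. Below is a minimal local Go sketch of that probe, for illustration only; it is not minikube's actual implementation, which runs the command over SSH inside the node container.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirror of the probe logged above: `sudo runc list -f json`,
		// used to see whether any container is paused.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On CRI-O the runc state dir /run/runc is absent, so runc
			// exits 1 ("open /run/runc: no such file or directory") and
			// minikube surfaces MK_ADDON_DISABLE_PAUSED.
			fmt.Println("check paused: list paused: runc:", err)
			return
		}
		// runc prints a JSON array of container state objects.
		var states []struct {
			ID     string `json:"id"`
			Status string `json:"status"`
		}
		if err := json.Unmarshal(out, &states); err != nil {
			fmt.Println("parse:", err)
			return
		}
		paused := 0
		for _, s := range states {
			if s.Status == "paused" {
				paused++
			}
		}
		fmt.Printf("%d containers, %d paused\n", len(states), paused)
	}

Running this on a CRI-O host reproduces the `open /run/runc: no such file or directory` failure seen throughout this report, while a node whose runtime actually uses runc returns a JSON array of container states.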

TestAddons/parallel/InspektorGadget (5.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-bht4v" [dd1abbd2-3f0b-4a9f-bee2-1447eb2b1b01] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004148027s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (308.665447ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:33:18.865438 1284045 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:33:18.866822 1284045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:18.866846 1284045 out.go:374] Setting ErrFile to fd 2...
	I1018 08:33:18.866852 1284045 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:18.867138 1284045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:33:18.867460 1284045 mustload.go:65] Loading cluster: addons-718596
	I1018 08:33:18.867924 1284045 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:18.867941 1284045 addons.go:606] checking whether the cluster is paused
	I1018 08:33:18.868051 1284045 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:18.868067 1284045 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:33:18.868544 1284045 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:33:18.888560 1284045 ssh_runner.go:195] Run: systemctl --version
	I1018 08:33:18.888629 1284045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:33:18.909720 1284045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:33:19.026810 1284045 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:33:19.026909 1284045 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:33:19.070471 1284045 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:33:19.070494 1284045 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:33:19.070499 1284045 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:33:19.070503 1284045 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:33:19.070506 1284045 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:33:19.070510 1284045 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:33:19.070513 1284045 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:33:19.070517 1284045 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:33:19.070520 1284045 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:33:19.070527 1284045 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:33:19.070530 1284045 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:33:19.070533 1284045 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:33:19.070537 1284045 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:33:19.070540 1284045 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:33:19.070543 1284045 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:33:19.070552 1284045 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:33:19.070558 1284045 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:33:19.070567 1284045 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:33:19.070570 1284045 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:33:19.070573 1284045 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:33:19.070578 1284045 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:33:19.070584 1284045 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:33:19.070587 1284045 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:33:19.070591 1284045 cri.go:89] found id: ""
	I1018 08:33:19.070638 1284045 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:33:19.087970 1284045 out.go:203] 
	W1018 08:33:19.091215 1284045 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:33:19.091311 1284045 out.go:285] * 
	* 
	W1018 08:33:19.100897 1284045 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:33:19.104127 1284045 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.31s)
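The passing half of this test ("k8s-app=gadget healthy within 5.004s") is a label-selector wait: the harness polls for pods matching k8s-app=gadget in the gadget namespace, with an 8m0s budget. A hedged client-go sketch of that wait follows; the kubeconfig path is a placeholder, and this reimplements the idea rather than the harness's real helpers_test.go code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	func main() {
		// Placeholder kubeconfig path, not the CI runner's real one.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(8 * time.Minute) // the test waits 8m0s
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("gadget").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=gadget"})
			if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Println("k8s-app=gadget healthy")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for k8s-app=gadget")
	}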

TestAddons/parallel/MetricsServer (5.38s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.179854ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003839638s
addons_test.go:463: (dbg) Run:  kubectl --context addons-718596 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (286.246628ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:33:13.560719 1283897 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:33:13.562218 1283897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:13.562261 1283897 out.go:374] Setting ErrFile to fd 2...
	I1018 08:33:13.562268 1283897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:13.562591 1283897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:33:13.563052 1283897 mustload.go:65] Loading cluster: addons-718596
	I1018 08:33:13.563485 1283897 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:13.563504 1283897 addons.go:606] checking whether the cluster is paused
	I1018 08:33:13.563645 1283897 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:13.563679 1283897 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:33:13.564212 1283897 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:33:13.587580 1283897 ssh_runner.go:195] Run: systemctl --version
	I1018 08:33:13.587655 1283897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:33:13.614172 1283897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:33:13.726280 1283897 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:33:13.726368 1283897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:33:13.756540 1283897 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:33:13.756561 1283897 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:33:13.756566 1283897 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:33:13.756571 1283897 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:33:13.756575 1283897 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:33:13.756579 1283897 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:33:13.756582 1283897 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:33:13.756585 1283897 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:33:13.756588 1283897 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:33:13.756597 1283897 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:33:13.756607 1283897 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:33:13.756610 1283897 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:33:13.756614 1283897 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:33:13.756617 1283897 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:33:13.756621 1283897 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:33:13.756628 1283897 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:33:13.756634 1283897 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:33:13.756639 1283897 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:33:13.756642 1283897 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:33:13.756646 1283897 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:33:13.756650 1283897 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:33:13.756653 1283897 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:33:13.756656 1283897 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:33:13.756659 1283897 cri.go:89] found id: ""
	I1018 08:33:13.756710 1283897 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:33:13.772010 1283897 out.go:203] 
	W1018 08:33:13.774899 1283897 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:33:13.774928 1283897 out.go:285] * 
	* 
	W1018 08:33:13.783944 1283897 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:33:13.786681 1283897 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.38s)
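Before the disable step fails, the test successfully runs `kubectl top pods -n kube-system`, which reads the metrics.k8s.io API served by metrics-server. A rough sketch of the same read using the metrics clientset; the kubeconfig path is a placeholder and this is illustrative, not the test's code.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metrics "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		// Placeholder kubeconfig path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		mc, err := metrics.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same API that `kubectl top pods -n kube-system` reads once
		// metrics-server is healthy.
		pm, err := mc.MetricsV1beta1().PodMetricses("kube-system").
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pm.Items {
			for _, c := range p.Containers {
				fmt.Printf("%s/%s cpu=%s mem=%s\n", p.Name, c.Name,
					c.Usage.Cpu().String(), c.Usage.Memory().String())
			}
		}
	}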

TestAddons/parallel/CSI (58.74s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1018 08:32:55.418147 1276097 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 08:32:55.423282 1276097 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 08:32:55.423311 1276097 kapi.go:107] duration metric: took 5.183817ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.193737ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-718596 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-718596 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [9521f832-b782-4a92-a837-5869f801df63] Pending
helpers_test.go:352: "task-pv-pod" [9521f832-b782-4a92-a837-5869f801df63] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [9521f832-b782-4a92-a837-5869f801df63] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.002899775s
addons_test.go:572: (dbg) Run:  kubectl --context addons-718596 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-718596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-718596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-718596 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-718596 delete pod task-pv-pod: (1.107158877s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-718596 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-718596 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-718596 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4e0efe77-e399-47c0-beed-3f6440468aca] Pending
helpers_test.go:352: "task-pv-pod-restore" [4e0efe77-e399-47c0-beed-3f6440468aca] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4e0efe77-e399-47c0-beed-3f6440468aca] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003282743s
addons_test.go:614: (dbg) Run:  kubectl --context addons-718596 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-718596 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-718596 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (255.812851ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:33:53.650985 1284921 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:33:53.652400 1284921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:53.652443 1284921 out.go:374] Setting ErrFile to fd 2...
	I1018 08:33:53.652467 1284921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:53.652794 1284921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:33:53.653141 1284921 mustload.go:65] Loading cluster: addons-718596
	I1018 08:33:53.653620 1284921 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:53.653659 1284921 addons.go:606] checking whether the cluster is paused
	I1018 08:33:53.653812 1284921 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:53.653846 1284921 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:33:53.654327 1284921 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:33:53.672151 1284921 ssh_runner.go:195] Run: systemctl --version
	I1018 08:33:53.672204 1284921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:33:53.689319 1284921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:33:53.790426 1284921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:33:53.790514 1284921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:33:53.820164 1284921 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:33:53.820185 1284921 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:33:53.820190 1284921 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:33:53.820197 1284921 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:33:53.820200 1284921 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:33:53.820204 1284921 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:33:53.820207 1284921 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:33:53.820210 1284921 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:33:53.820213 1284921 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:33:53.820219 1284921 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:33:53.820223 1284921 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:33:53.820233 1284921 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:33:53.820241 1284921 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:33:53.820244 1284921 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:33:53.820248 1284921 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:33:53.820253 1284921 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:33:53.820259 1284921 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:33:53.820264 1284921 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:33:53.820268 1284921 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:33:53.820271 1284921 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:33:53.820276 1284921 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:33:53.820282 1284921 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:33:53.820285 1284921 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:33:53.820289 1284921 cri.go:89] found id: ""
	I1018 08:33:53.820347 1284921 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:33:53.835018 1284921 out.go:203] 
	W1018 08:33:53.838065 1284921 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:53Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:33:53.838092 1284921 out.go:285] * 
	* 
	W1018 08:33:53.847123 1284921 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:33:53.850122 1284921 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (296.457028ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:33:53.925428 1284965 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:33:53.926677 1284965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:53.926720 1284965 out.go:374] Setting ErrFile to fd 2...
	I1018 08:33:53.926742 1284965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:33:53.927046 1284965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:33:53.927398 1284965 mustload.go:65] Loading cluster: addons-718596
	I1018 08:33:53.927875 1284965 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:53.927933 1284965 addons.go:606] checking whether the cluster is paused
	I1018 08:33:53.928093 1284965 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:33:53.928139 1284965 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:33:53.928741 1284965 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:33:53.948382 1284965 ssh_runner.go:195] Run: systemctl --version
	I1018 08:33:53.948439 1284965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:33:53.968673 1284965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:33:54.078845 1284965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:33:54.079002 1284965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:33:54.114561 1284965 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:33:54.114624 1284965 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:33:54.114632 1284965 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:33:54.114636 1284965 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:33:54.114639 1284965 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:33:54.114643 1284965 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:33:54.114646 1284965 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:33:54.114649 1284965 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:33:54.114652 1284965 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:33:54.114658 1284965 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:33:54.114662 1284965 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:33:54.114665 1284965 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:33:54.114668 1284965 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:33:54.114671 1284965 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:33:54.114674 1284965 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:33:54.114679 1284965 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:33:54.114682 1284965 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:33:54.114685 1284965 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:33:54.114688 1284965 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:33:54.114691 1284965 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:33:54.114697 1284965 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:33:54.114700 1284965 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:33:54.114703 1284965 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:33:54.114706 1284965 cri.go:89] found id: ""
	I1018 08:33:54.114753 1284965 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:33:54.131219 1284965 out.go:203] 
	W1018 08:33:54.134666 1284965 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:33:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:33:54.134704 1284965 out.go:285] * 
	* 
	W1018 08:33:54.143657 1284965 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:33:54.146681 1284965 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (58.74s)
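The long runs of helpers_test.go:402 lines above are a poll loop: each line is one `kubectl get pvc ... -o jsonpath={.status.phase}` invocation, repeated until the claim reports Bound or the 6m0s budget expires. A hedged client-go sketch of the same loop (placeholder kubeconfig path, illustrative only):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // the test waits 6m0s
		for time.Now().Before(deadline) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims("default").
				Get(context.TODO(), "hpvc", metav1.GetOptions{})
			if err == nil && pvc.Status.Phase == corev1.ClaimBound {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			// Each helpers_test.go:402 line above is one such poll.
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc hpvc")
	}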

TestAddons/parallel/Headlamp (3.25s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-718596 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-718596 --alsologtostderr -v=1: exit status 11 (274.501277ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:32:52.213274 1283092 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:52.214535 1283092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:52.214552 1283092 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:52.214557 1283092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:52.214806 1283092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:32:52.215142 1283092 mustload.go:65] Loading cluster: addons-718596
	I1018 08:32:52.215500 1283092 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:52.215516 1283092 addons.go:606] checking whether the cluster is paused
	I1018 08:32:52.215628 1283092 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:52.215648 1283092 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:32:52.216215 1283092 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:32:52.242726 1283092 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:52.242787 1283092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:32:52.263534 1283092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:32:52.366358 1283092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:52.366475 1283092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:52.402312 1283092 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:32:52.402345 1283092 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:32:52.402350 1283092 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:32:52.402354 1283092 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:32:52.402357 1283092 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:32:52.402360 1283092 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:32:52.402363 1283092 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:32:52.402366 1283092 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:32:52.402370 1283092 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:32:52.402377 1283092 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:32:52.402381 1283092 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:32:52.402385 1283092 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:32:52.402388 1283092 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:32:52.402391 1283092 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:32:52.402401 1283092 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:32:52.402410 1283092 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:32:52.402414 1283092 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:32:52.402419 1283092 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:32:52.402422 1283092 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:32:52.402425 1283092 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:32:52.402430 1283092 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:32:52.402432 1283092 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:32:52.402436 1283092 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:32:52.402443 1283092 cri.go:89] found id: ""
	I1018 08:32:52.402504 1283092 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:52.418389 1283092 out.go:203] 
	W1018 08:32:52.421318 1283092 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:52.421342 1283092 out.go:285] * 
	* 
	W1018 08:32:52.430389 1283092 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:52.433362 1283092 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-718596 --alsologtostderr -v=1": exit status 11
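The enable path dies the same way as the disable paths, just with MK_ADDON_ENABLE_PAUSED instead of MK_ADDON_DISABLE_PAUSED; note that before probing runc, minikube first confirms the node container is up via the cli_runner.go docker inspect call in the stderr above. A trivial local sketch of that first step (illustrative; the profile name is taken from this report):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same inspection as cli_runner.go:164 in the stderr above.
		out, err := exec.Command("docker", "container", "inspect",
			"addons-718596", "--format", "{{.State.Status}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Expected "running" for this profile, per the docker inspect
		// post-mortem below.
		fmt.Println("state:", strings.TrimSpace(string(out)))
	}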
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-718596
helpers_test.go:243: (dbg) docker inspect addons-718596:

-- stdout --
	[
	    {
	        "Id": "1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292",
	        "Created": "2025-10-18T08:30:14.15517958Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1277258,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T08:30:14.221505486Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/hostname",
	        "HostsPath": "/var/lib/docker/containers/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/hosts",
	        "LogPath": "/var/lib/docker/containers/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292/1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292-json.log",
	        "Name": "/addons-718596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-718596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-718596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1da112bdf57ccc7d8da6bcbee61a9f3bab5ab8a465f139996599b3bb8d462292",
	                "LowerDir": "/var/lib/docker/overlay2/46018aad8cff278750f0c63dd3e2338fc02fc1faf3fc20e510086c0eb07c6cb6-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46018aad8cff278750f0c63dd3e2338fc02fc1faf3fc20e510086c0eb07c6cb6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46018aad8cff278750f0c63dd3e2338fc02fc1faf3fc20e510086c0eb07c6cb6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46018aad8cff278750f0c63dd3e2338fc02fc1faf3fc20e510086c0eb07c6cb6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-718596",
	                "Source": "/var/lib/docker/volumes/addons-718596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-718596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-718596",
	                "name.minikube.sigs.k8s.io": "addons-718596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1d4668ccbf9641fa9255a434a34f258857ac42e41520e21c0fa31fd9f4cf7fa7",
	            "SandboxKey": "/var/run/docker/netns/1d4668ccbf96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34591"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34592"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34595"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34593"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34594"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-718596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:68:f3:4f:66:5e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e521cd8786e64916a2fa82c7e1b4ef4883e53245ebc0e9edab985ff6e857cb1",
	                    "EndpointID": "016be47b442e12f6291f2c4f8dc41a6222e1d42ca95fb4588f6e4964981c89b2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-718596",
	                        "1da112bdf57c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
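The inspect payload above is the same data that later provisioning steps query with --format templates. A minimal Go sketch that decodes it and pulls out the forwarded SSH port (the struct is an illustrative subset, not minikube's types):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "addons-718596").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		if len(entries) == 0 {
			panic("no inspect entries")
		}
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh forwarded to %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:34591 above
		}
	}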
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-718596 -n addons-718596
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-718596 logs -n 25: (1.451422436s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-395497 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-395497   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-395497                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-395497   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-387437 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-387437   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-387437                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-387437   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-395497                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-395497   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-387437                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-387437   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ --download-only -p download-docker-695796 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-695796 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ -p download-docker-695796                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-695796 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-375009 --alsologtostderr --binary-mirror http://127.0.0.1:42419 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-375009   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-375009                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-375009   │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-718596                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-718596                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ start   │ -p addons-718596 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:32 UTC │
	│ addons  │ addons-718596 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ addons-718596 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	│ addons  │ enable headlamp -p addons-718596 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-718596          │ jenkins │ v1.37.0 │ 18 Oct 25 08:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:29:47.349632 1276853 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:47.349834 1276853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:47.349865 1276853 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:47.349884 1276853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:47.350258 1276853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:29:47.350877 1276853 out.go:368] Setting JSON to false
	I1018 08:29:47.351861 1276853 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36735,"bootTime":1760739453,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 08:29:47.351930 1276853 start.go:141] virtualization:  
	I1018 08:29:47.355179 1276853 out.go:179] * [addons-718596] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 08:29:47.358924 1276853 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:29:47.359079 1276853 notify.go:220] Checking for updates...
	I1018 08:29:47.364713 1276853 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:47.367577 1276853 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:29:47.370362 1276853 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 08:29:47.373252 1276853 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 08:29:47.376017 1276853 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:29:47.378979 1276853 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:47.409945 1276853 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 08:29:47.410106 1276853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:47.464340 1276853 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 08:29:47.455147592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:29:47.464456 1276853 docker.go:318] overlay module found
	I1018 08:29:47.467520 1276853 out.go:179] * Using the docker driver based on user configuration
	I1018 08:29:47.470234 1276853 start.go:305] selected driver: docker
	I1018 08:29:47.470252 1276853 start.go:925] validating driver "docker" against <nil>
	I1018 08:29:47.470266 1276853 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:29:47.471007 1276853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:47.523216 1276853 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-18 08:29:47.5145796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
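Both docker info snapshots above come from `docker system info --format "{{json .}}"`, which minikube decodes to validate the driver. A small Go sketch of the same decode (field subset only; names follow the keys visible in the payload above):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type dockerInfo struct {
		NCPU         int
		MemTotal     int64
		Driver       string
		CgroupDriver string
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		// e.g. cpus=2 mem=8214839296 driver=overlay2 cgroup=cgroupfs per the log above
		fmt.Printf("cpus=%d mem=%d driver=%s cgroup=%s\n", info.NCPU, info.MemTotal, info.Driver, info.CgroupDriver)
	}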
	I1018 08:29:47.523374 1276853 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:47.523602 1276853 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:29:47.526516 1276853 out.go:179] * Using Docker driver with root privileges
	I1018 08:29:47.529363 1276853 cni.go:84] Creating CNI manager for ""
	I1018 08:29:47.529459 1276853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:29:47.529526 1276853 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:29:47.529619 1276853 start.go:349] cluster config:
	{Name:addons-718596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:47.532779 1276853 out.go:179] * Starting "addons-718596" primary control-plane node in "addons-718596" cluster
	I1018 08:29:47.535572 1276853 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:29:47.538415 1276853 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:29:47.541217 1276853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:47.541271 1276853 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 08:29:47.541298 1276853 cache.go:58] Caching tarball of preloaded images
	I1018 08:29:47.541312 1276853 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:29:47.541388 1276853 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 08:29:47.541398 1276853 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:29:47.541727 1276853 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/config.json ...
	I1018 08:29:47.541749 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/config.json: {Name:mka4d001fbaa07ca0818af11df2d956be6ef062b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:47.556920 1276853 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:29:47.557036 1276853 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:29:47.557055 1276853 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 08:29:47.557061 1276853 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 08:29:47.557068 1276853 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 08:29:47.557073 1276853 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 08:30:05.781930 1276853 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 08:30:05.781971 1276853 cache.go:232] Successfully downloaded all kic artifacts
	I1018 08:30:05.782002 1276853 start.go:360] acquireMachinesLock for addons-718596: {Name:mk7bf7588de7d6bcca70e234e6145d68c8ec74e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:30:05.782125 1276853 start.go:364] duration metric: took 98.992µs to acquireMachinesLock for "addons-718596"
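acquireMachinesLock above is a named lock taken with Delay:500ms and Timeout:10m0s; minikube uses a mutex library for it, but the retry-until-deadline pattern looks roughly like this sketch (the lock path and helper name here are hypothetical, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation atomic: whoever creates the file holds the lock.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/machines-addons-718596.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held; machine provisioning would run here")
	}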
	I1018 08:30:05.782157 1276853 start.go:93] Provisioning new machine with config: &{Name:addons-718596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:30:05.782225 1276853 start.go:125] createHost starting for "" (driver="docker")
	I1018 08:30:05.785695 1276853 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 08:30:05.785931 1276853 start.go:159] libmachine.API.Create for "addons-718596" (driver="docker")
	I1018 08:30:05.785984 1276853 client.go:168] LocalClient.Create starting
	I1018 08:30:05.786109 1276853 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem
	I1018 08:30:06.440988 1276853 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem
	I1018 08:30:07.431950 1276853 cli_runner.go:164] Run: docker network inspect addons-718596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 08:30:07.447582 1276853 cli_runner.go:211] docker network inspect addons-718596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 08:30:07.447661 1276853 network_create.go:284] running [docker network inspect addons-718596] to gather additional debugging logs...
	I1018 08:30:07.447680 1276853 cli_runner.go:164] Run: docker network inspect addons-718596
	W1018 08:30:07.462764 1276853 cli_runner.go:211] docker network inspect addons-718596 returned with exit code 1
	I1018 08:30:07.462794 1276853 network_create.go:287] error running [docker network inspect addons-718596]: docker network inspect addons-718596: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-718596 not found
	I1018 08:30:07.462809 1276853 network_create.go:289] output of [docker network inspect addons-718596]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-718596 not found
	
	** /stderr **
	I1018 08:30:07.462922 1276853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:30:07.479341 1276853 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d74100}
	I1018 08:30:07.479386 1276853 network_create.go:124] attempt to create docker network addons-718596 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 08:30:07.479448 1276853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-718596 addons-718596
	I1018 08:30:07.540044 1276853 network_create.go:108] docker network addons-718596 192.168.49.0/24 created
	I1018 08:30:07.540076 1276853 kic.go:121] calculated static IP "192.168.49.2" for the "addons-718596" container
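Per the free-subnet probe logged above, the kic driver places the gateway at .1 and the first node at .2 inside the chosen subnet. A trivial Go check of those values with net/netip, using the numbers from the probe:

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		subnet := netip.MustParsePrefix("192.168.49.0/24") // free subnet found above
		gateway := netip.MustParseAddr("192.168.49.1")     // Gateway
		node := netip.MustParseAddr("192.168.49.2")        // ClientMin, used as the static node IP
		fmt.Println(subnet.Contains(gateway), subnet.Contains(node)) // true true
	}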
	I1018 08:30:07.540161 1276853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 08:30:07.556261 1276853 cli_runner.go:164] Run: docker volume create addons-718596 --label name.minikube.sigs.k8s.io=addons-718596 --label created_by.minikube.sigs.k8s.io=true
	I1018 08:30:07.573188 1276853 oci.go:103] Successfully created a docker volume addons-718596
	I1018 08:30:07.573279 1276853 cli_runner.go:164] Run: docker run --rm --name addons-718596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718596 --entrypoint /usr/bin/test -v addons-718596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 08:30:09.680086 1276853 cli_runner.go:217] Completed: docker run --rm --name addons-718596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718596 --entrypoint /usr/bin/test -v addons-718596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.106767691s)
	I1018 08:30:09.680121 1276853 oci.go:107] Successfully prepared a docker volume addons-718596
	I1018 08:30:09.680159 1276853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:30:09.680177 1276853 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 08:30:09.680238 1276853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 08:30:14.075644 1276853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-718596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.395372656s)
	I1018 08:30:14.075676 1276853 kic.go:203] duration metric: took 4.395495868s to extract preloaded images to volume ...
	W1018 08:30:14.075822 1276853 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 08:30:14.075963 1276853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 08:30:14.142486 1276853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-718596 --name addons-718596 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-718596 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-718596 --network addons-718596 --ip 192.168.49.2 --volume addons-718596:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 08:30:14.448782 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Running}}
	I1018 08:30:14.470203 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:14.491407 1276853 cli_runner.go:164] Run: docker exec addons-718596 stat /var/lib/dpkg/alternatives/iptables
	I1018 08:30:14.545159 1276853 oci.go:144] the created container "addons-718596" has a running status.
	I1018 08:30:14.545185 1276853 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa...
	I1018 08:30:15.457300 1276853 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 08:30:15.479457 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:15.496956 1276853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 08:30:15.496975 1276853 kic_runner.go:114] Args: [docker exec --privileged addons-718596 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 08:30:15.548493 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:15.567685 1276853 machine.go:93] provisionDockerMachine start ...
	I1018 08:30:15.567793 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:15.587494 1276853 main.go:141] libmachine: Using SSH client type: native
	I1018 08:30:15.588502 1276853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1018 08:30:15.588519 1276853 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 08:30:15.743372 1276853 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718596
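"SSH client type: native" above means a Go SSH client dialing the forwarded 22/tcp port with the generated id_rsa, rather than shelling out to ssh. A self-contained sketch of that connection (port and key path taken from this log; a sketch of the pattern, not minikube's code):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34591", cfg) // the 22/tcp mapping from the inspect output
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname") // same first command as the log above
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out) // addons-718596
	}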
	
	I1018 08:30:15.743397 1276853 ubuntu.go:182] provisioning hostname "addons-718596"
	I1018 08:30:15.743464 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:15.760833 1276853 main.go:141] libmachine: Using SSH client type: native
	I1018 08:30:15.761156 1276853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1018 08:30:15.761174 1276853 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-718596 && echo "addons-718596" | sudo tee /etc/hostname
	I1018 08:30:15.921056 1276853 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718596
	
	I1018 08:30:15.921149 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:15.938832 1276853 main.go:141] libmachine: Using SSH client type: native
	I1018 08:30:15.939147 1276853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1018 08:30:15.939168 1276853 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-718596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-718596/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-718596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 08:30:16.088965 1276853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:30:16.088992 1276853 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 08:30:16.089021 1276853 ubuntu.go:190] setting up certificates
	I1018 08:30:16.089032 1276853 provision.go:84] configureAuth start
	I1018 08:30:16.089092 1276853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718596
	I1018 08:30:16.105242 1276853 provision.go:143] copyHostCerts
	I1018 08:30:16.105328 1276853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 08:30:16.105466 1276853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 08:30:16.105535 1276853 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 08:30:16.105590 1276853 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.addons-718596 san=[127.0.0.1 192.168.49.2 addons-718596 localhost minikube]
	I1018 08:30:16.577032 1276853 provision.go:177] copyRemoteCerts
	I1018 08:30:16.577097 1276853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 08:30:16.577137 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:16.602794 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:16.703417 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 08:30:16.720870 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 08:30:16.738969 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1018 08:30:16.755944 1276853 provision.go:87] duration metric: took 666.898547ms to configureAuth
	I1018 08:30:16.756006 1276853 ubuntu.go:206] setting minikube options for container-runtime
	I1018 08:30:16.756191 1276853 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:16.756298 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:16.777012 1276853 main.go:141] libmachine: Using SSH client type: native
	I1018 08:30:16.777340 1276853 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34591 <nil> <nil>}
	I1018 08:30:16.777361 1276853 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 08:30:17.034090 1276853 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 08:30:17.034178 1276853 machine.go:96] duration metric: took 1.466472783s to provisionDockerMachine
	I1018 08:30:17.034203 1276853 client.go:171] duration metric: took 11.248209061s to LocalClient.Create
	I1018 08:30:17.034249 1276853 start.go:167] duration metric: took 11.248319621s to libmachine.API.Create "addons-718596"
	I1018 08:30:17.034276 1276853 start.go:293] postStartSetup for "addons-718596" (driver="docker")
	I1018 08:30:17.034300 1276853 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 08:30:17.034396 1276853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 08:30:17.034509 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:17.053760 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:17.160038 1276853 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 08:30:17.163354 1276853 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 08:30:17.163390 1276853 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 08:30:17.163401 1276853 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 08:30:17.163519 1276853 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 08:30:17.163560 1276853 start.go:296] duration metric: took 129.255685ms for postStartSetup
	I1018 08:30:17.163952 1276853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718596
	I1018 08:30:17.181416 1276853 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/config.json ...
	I1018 08:30:17.181707 1276853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:30:17.181766 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:17.199084 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:17.304718 1276853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 08:30:17.309079 1276853 start.go:128] duration metric: took 11.526838396s to createHost
	I1018 08:30:17.309164 1276853 start.go:83] releasing machines lock for "addons-718596", held for 11.527025517s
	I1018 08:30:17.309267 1276853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-718596
	I1018 08:30:17.325293 1276853 ssh_runner.go:195] Run: cat /version.json
	I1018 08:30:17.325347 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:17.325367 1276853 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 08:30:17.325427 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:17.342916 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:17.345295 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:17.541415 1276853 ssh_runner.go:195] Run: systemctl --version
	I1018 08:30:17.547535 1276853 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 08:30:17.582099 1276853 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 08:30:17.586301 1276853 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 08:30:17.586368 1276853 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 08:30:17.612596 1276853 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 08:30:17.612616 1276853 start.go:495] detecting cgroup driver to use...
	I1018 08:30:17.612646 1276853 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 08:30:17.612704 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 08:30:17.630196 1276853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 08:30:17.642870 1276853 docker.go:218] disabling cri-docker service (if available) ...
	I1018 08:30:17.642986 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 08:30:17.660286 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 08:30:17.678225 1276853 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 08:30:17.795587 1276853 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 08:30:17.923259 1276853 docker.go:234] disabling docker service ...
	I1018 08:30:17.923344 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 08:30:17.944527 1276853 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 08:30:17.957667 1276853 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 08:30:18.076889 1276853 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 08:30:18.187472 1276853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 08:30:18.200438 1276853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 08:30:18.215224 1276853 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 08:30:18.215293 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.225000 1276853 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 08:30:18.225073 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.233892 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.242514 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.251025 1276853 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 08:30:18.259261 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.268057 1276853 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:30:18.281362 1276853 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
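	[editor note] Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf (a reconstruction; section headers and any other keys in the stock kicbase file are omitted and assumed unchanged):
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]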
	I1018 08:30:18.290579 1276853 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 08:30:18.298273 1276853 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 08:30:18.305698 1276853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:30:18.411952 1276853 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 08:30:18.535124 1276853 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 08:30:18.535279 1276853 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 08:30:18.539298 1276853 start.go:563] Will wait 60s for crictl version
	I1018 08:30:18.539411 1276853 ssh_runner.go:195] Run: which crictl
	I1018 08:30:18.542797 1276853 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 08:30:18.565782 1276853 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 08:30:18.565947 1276853 ssh_runner.go:195] Run: crio --version
	I1018 08:30:18.595054 1276853 ssh_runner.go:195] Run: crio --version
	I1018 08:30:18.626900 1276853 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 08:30:18.629585 1276853 cli_runner.go:164] Run: docker network inspect addons-718596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
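	[editor note] The Go template above hand-assembles a JSON summary of the network; the same fields can be pulled with plain docker + jq (a sketch, not what minikube runs):
	  docker network inspect addons-718596 \
	    | jq '{Name: .[0].Name, Driver: .[0].Driver, Subnet: .[0].IPAM.Config[0].Subnet, Gateway: .[0].IPAM.Config[0].Gateway}'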
	I1018 08:30:18.645359 1276853 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 08:30:18.648782 1276853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
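	[editor note] The /etc/hosts edit above filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back with cp rather than mv: inside a container /etc/hosts is a bind mount, so it must be rewritten in place. The same idempotent pattern, generalized (hypothetical NAME/IP):
	  NAME=host.minikube.internal; IP=192.168.49.1
	  { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$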
	I1018 08:30:18.657865 1276853 kubeadm.go:883] updating cluster {Name:addons-718596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 08:30:18.657989 1276853 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:30:18.658054 1276853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:30:18.688834 1276853 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:30:18.688857 1276853 crio.go:433] Images already preloaded, skipping extraction
	I1018 08:30:18.688912 1276853 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:30:18.714008 1276853 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:30:18.714031 1276853 cache_images.go:85] Images are preloaded, skipping loading
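	[editor note] The preload check above parses crictl's JSON image list; the same inventory can be eyeballed by hand (a sketch; assumes jq is installed on the node):
	  sudo crictl images --output json | jq -r '.images[].repoTags[]'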
	I1018 08:30:18.714039 1276853 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 08:30:18.714127 1276853 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-718596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
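	[editor note] In the drop-in above, the empty ExecStart= line is the standard systemd idiom: it clears the base unit's ExecStart so the following line fully replaces the command rather than appending a second one. The merged unit can be inspected on the node with:
	  systemctl cat kubelet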
	I1018 08:30:18.714212 1276853 ssh_runner.go:195] Run: crio config
	I1018 08:30:18.767246 1276853 cni.go:84] Creating CNI manager for ""
	I1018 08:30:18.767269 1276853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:30:18.767288 1276853 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 08:30:18.767319 1276853 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-718596 NodeName:addons-718596 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 08:30:18.767469 1276853 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-718596"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
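	[editor note] A generated config like the one above can be sanity-checked before init (a sketch; `kubeadm config validate` is available on recent kubeadm releases):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml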
	I1018 08:30:18.767552 1276853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 08:30:18.775296 1276853 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 08:30:18.775366 1276853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 08:30:18.782828 1276853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 08:30:18.794964 1276853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 08:30:18.807932 1276853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1018 08:30:18.820556 1276853 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 08:30:18.824035 1276853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:30:18.833814 1276853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:30:18.950358 1276853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:30:18.965412 1276853 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596 for IP: 192.168.49.2
	I1018 08:30:18.965443 1276853 certs.go:195] generating shared ca certs ...
	I1018 08:30:18.965460 1276853 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:18.965606 1276853 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 08:30:19.539532 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt ...
	I1018 08:30:19.539564 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt: {Name:mk14aac60bd0c5732eec7cb3257c85d7c2ed1b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:19.539790 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key ...
	I1018 08:30:19.539806 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key: {Name:mkdeaaf9a4bd1141ccaf9c64e8f433b86d74556c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:19.539918 1276853 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 08:30:19.931522 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt ...
	I1018 08:30:19.931553 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt: {Name:mka1c450e0a44ea2f01dd153e2e4b5997f1b2b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:19.931745 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key ...
	I1018 08:30:19.931758 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key: {Name:mkc0f689cdc574ec5d286e831e608d80527bb985 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:19.931860 1276853 certs.go:257] generating profile certs ...
	I1018 08:30:19.931924 1276853 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.key
	I1018 08:30:19.931943 1276853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt with IP's: []
	I1018 08:30:20.645020 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt ...
	I1018 08:30:20.645052 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: {Name:mk5a3f63526334e3704b03a78c81701653479538 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:20.645240 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.key ...
	I1018 08:30:20.645256 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.key: {Name:mk6690ee8a90850afad155899a6abb40f48b949a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:20.645331 1276853 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key.5ba3ca1a
	I1018 08:30:20.645347 1276853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt.5ba3ca1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 08:30:21.104996 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt.5ba3ca1a ...
	I1018 08:30:21.105026 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt.5ba3ca1a: {Name:mkd5d34969ee8b2bf2fa41e0d6fba7d1be0451b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:21.105211 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key.5ba3ca1a ...
	I1018 08:30:21.105225 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key.5ba3ca1a: {Name:mk42c5900156e6c5c1e92c4f35880e214e106590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:21.105320 1276853 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt.5ba3ca1a -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt
	I1018 08:30:21.105402 1276853 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key.5ba3ca1a -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key
	I1018 08:30:21.105458 1276853 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.key
	I1018 08:30:21.105483 1276853 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.crt with IP's: []
	I1018 08:30:21.442761 1276853 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.crt ...
	I1018 08:30:21.442790 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.crt: {Name:mk3c36423478c35b23a45d0a41a38444911aac0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:21.442982 1276853 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.key ...
	I1018 08:30:21.442996 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.key: {Name:mk5764ba6408574aaecdb08ee06e0b6dddc0d0ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
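	[editor note] A quick offline check that the freshly minted profile certs chain to minikubeCA and carry the SANs requested above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2) — a sketch; the -ext flag needs openssl >= 1.1.1:
	  CERTDIR=/home/jenkins/minikube-integration/21767-1274243/.minikube
	  openssl verify -CAfile $CERTDIR/ca.crt $CERTDIR/profiles/addons-718596/apiserver.crt
	  openssl x509 -noout -ext subjectAltName -in $CERTDIR/profiles/addons-718596/apiserver.crt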
	I1018 08:30:21.443192 1276853 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 08:30:21.443234 1276853 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 08:30:21.443263 1276853 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 08:30:21.443301 1276853 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 08:30:21.443887 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 08:30:21.461741 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 08:30:21.479455 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 08:30:21.496903 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 08:30:21.514024 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 08:30:21.531077 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 08:30:21.547278 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 08:30:21.564183 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 08:30:21.580431 1276853 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 08:30:21.596963 1276853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 08:30:21.608765 1276853 ssh_runner.go:195] Run: openssl version
	I1018 08:30:21.614821 1276853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 08:30:21.622726 1276853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:30:21.626039 1276853 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:30:21.626095 1276853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:30:21.666701 1276853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
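	[editor note] The b5213941.0 name above follows OpenSSL's subject-hash convention: libraries resolve CAs in /etc/ssl/certs via <subject-hash>.0 symlinks, and the hash is exactly what the preceding `openssl x509 -hash` step computed:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941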
	I1018 08:30:21.674713 1276853 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 08:30:21.677873 1276853 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 08:30:21.677914 1276853 kubeadm.go:400] StartCluster: {Name:addons-718596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-718596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:30:21.677985 1276853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:30:21.678036 1276853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:30:21.705348 1276853 cri.go:89] found id: ""
	I1018 08:30:21.705488 1276853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 08:30:21.713820 1276853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 08:30:21.722113 1276853 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 08:30:21.722175 1276853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 08:30:21.729614 1276853 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 08:30:21.729638 1276853 kubeadm.go:157] found existing configuration files:
	
	I1018 08:30:21.729706 1276853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 08:30:21.737257 1276853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 08:30:21.737328 1276853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 08:30:21.744507 1276853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 08:30:21.752022 1276853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 08:30:21.752126 1276853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 08:30:21.759528 1276853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 08:30:21.766705 1276853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 08:30:21.766767 1276853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 08:30:21.773677 1276853 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 08:30:21.781136 1276853 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 08:30:21.781255 1276853 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 08:30:21.788577 1276853 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 08:30:21.824222 1276853 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 08:30:21.824288 1276853 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 08:30:21.852327 1276853 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 08:30:21.852443 1276853 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 08:30:21.852512 1276853 kubeadm.go:318] OS: Linux
	I1018 08:30:21.852591 1276853 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 08:30:21.852693 1276853 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 08:30:21.852775 1276853 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 08:30:21.852853 1276853 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 08:30:21.852930 1276853 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 08:30:21.853032 1276853 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 08:30:21.853108 1276853 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 08:30:21.853201 1276853 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 08:30:21.853262 1276853 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 08:30:21.922996 1276853 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 08:30:21.923167 1276853 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 08:30:21.923285 1276853 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 08:30:21.936272 1276853 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 08:30:21.943234 1276853 out.go:252]   - Generating certificates and keys ...
	I1018 08:30:21.943349 1276853 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 08:30:21.943435 1276853 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 08:30:23.423582 1276853 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 08:30:23.767009 1276853 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 08:30:23.965278 1276853 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 08:30:24.459155 1276853 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 08:30:24.978008 1276853 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 08:30:24.978351 1276853 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-718596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:30:25.645134 1276853 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 08:30:25.645413 1276853 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-718596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:30:25.723134 1276853 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 08:30:26.433378 1276853 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 08:30:26.957625 1276853 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 08:30:26.957880 1276853 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 08:30:28.379666 1276853 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 08:30:28.513923 1276853 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 08:30:28.997950 1276853 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 08:30:30.268489 1276853 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 08:30:30.519851 1276853 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 08:30:30.520397 1276853 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 08:30:30.523071 1276853 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 08:30:30.526411 1276853 out.go:252]   - Booting up control plane ...
	I1018 08:30:30.526519 1276853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 08:30:30.526603 1276853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 08:30:30.526674 1276853 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 08:30:30.542280 1276853 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 08:30:30.542625 1276853 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 08:30:30.550699 1276853 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 08:30:30.551084 1276853 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 08:30:30.551370 1276853 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 08:30:30.678840 1276853 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 08:30:30.678969 1276853 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 08:30:31.684209 1276853 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.004700026s
	I1018 08:30:31.693548 1276853 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 08:30:31.693730 1276853 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 08:30:31.693827 1276853 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 08:30:31.694155 1276853 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 08:30:34.564594 1276853 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.870042289s
	I1018 08:30:36.317849 1276853 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.623470697s
	I1018 08:30:38.195349 1276853 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.501289314s
	I1018 08:30:38.218682 1276853 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 08:30:38.234580 1276853 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 08:30:38.253246 1276853 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 08:30:38.253573 1276853 kubeadm.go:318] [mark-control-plane] Marking the node addons-718596 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 08:30:38.270595 1276853 kubeadm.go:318] [bootstrap-token] Using token: einyuv.xotqzq233w49k3mh
	I1018 08:30:38.273708 1276853 out.go:252]   - Configuring RBAC rules ...
	I1018 08:30:38.273853 1276853 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 08:30:38.277976 1276853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 08:30:38.286441 1276853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 08:30:38.291243 1276853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 08:30:38.295452 1276853 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 08:30:38.301441 1276853 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 08:30:38.603962 1276853 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 08:30:39.038580 1276853 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 08:30:39.604157 1276853 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 08:30:39.605583 1276853 kubeadm.go:318] 
	I1018 08:30:39.605655 1276853 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 08:30:39.605660 1276853 kubeadm.go:318] 
	I1018 08:30:39.605736 1276853 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 08:30:39.605741 1276853 kubeadm.go:318] 
	I1018 08:30:39.605767 1276853 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 08:30:39.605826 1276853 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 08:30:39.605875 1276853 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 08:30:39.605880 1276853 kubeadm.go:318] 
	I1018 08:30:39.605934 1276853 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 08:30:39.605939 1276853 kubeadm.go:318] 
	I1018 08:30:39.605986 1276853 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 08:30:39.605991 1276853 kubeadm.go:318] 
	I1018 08:30:39.606042 1276853 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 08:30:39.606117 1276853 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 08:30:39.606184 1276853 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 08:30:39.606188 1276853 kubeadm.go:318] 
	I1018 08:30:39.606271 1276853 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 08:30:39.606347 1276853 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 08:30:39.606352 1276853 kubeadm.go:318] 
	I1018 08:30:39.606441 1276853 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token einyuv.xotqzq233w49k3mh \
	I1018 08:30:39.606544 1276853 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 \
	I1018 08:30:39.606564 1276853 kubeadm.go:318] 	--control-plane 
	I1018 08:30:39.606568 1276853 kubeadm.go:318] 
	I1018 08:30:39.606652 1276853 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 08:30:39.606656 1276853 kubeadm.go:318] 
	I1018 08:30:39.606737 1276853 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token einyuv.xotqzq233w49k3mh \
	I1018 08:30:39.606838 1276853 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 
	I1018 08:30:39.609192 1276853 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 08:30:39.609431 1276853 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 08:30:39.609540 1276853 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
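	[editor note] The --discovery-token-ca-cert-hash in the join commands above is sha256 over the CA's DER-encoded public key; it can be re-derived with the standard kubeadm recipe (a sketch; minikube keeps the CA at /var/lib/minikube/certs rather than /etc/kubernetes/pki):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'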
	I1018 08:30:39.609555 1276853 cni.go:84] Creating CNI manager for ""
	I1018 08:30:39.609563 1276853 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:30:39.612752 1276853 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 08:30:39.615561 1276853 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 08:30:39.619691 1276853 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 08:30:39.619758 1276853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 08:30:39.631978 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 08:30:39.949511 1276853 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 08:30:39.949654 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:39.949733 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-718596 minikube.k8s.io/updated_at=2025_10_18T08_30_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=addons-718596 minikube.k8s.io/primary=true
	I1018 08:30:40.156919 1276853 ops.go:34] apiserver oom_adj: -16
	I1018 08:30:40.157031 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:40.657834 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:41.157185 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:41.657876 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:42.158041 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:42.657168 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:43.157355 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:43.657323 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:44.157673 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:30:44.657104 1276853 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
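	[editor note] The repeated `get sa default` calls above (every ~500ms) are minikube waiting for the token controller to create the default ServiceAccount, the signal that kube-system privileges can be elevated. The same wait as a standalone loop (a sketch):
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done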
	I1018 08:30:44.739668 1276853 kubeadm.go:1113] duration metric: took 4.790066572s to wait for elevateKubeSystemPrivileges
	I1018 08:30:44.739698 1276853 kubeadm.go:402] duration metric: took 23.061787025s to StartCluster
	I1018 08:30:44.739716 1276853 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:44.739828 1276853 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:30:44.740241 1276853 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:30:44.740438 1276853 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:30:44.740576 1276853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 08:30:44.740815 1276853 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:44.740850 1276853 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 08:30:44.740930 1276853 addons.go:69] Setting yakd=true in profile "addons-718596"
	I1018 08:30:44.740942 1276853 addons.go:69] Setting inspektor-gadget=true in profile "addons-718596"
	I1018 08:30:44.740951 1276853 addons.go:69] Setting metrics-server=true in profile "addons-718596"
	I1018 08:30:44.740962 1276853 addons.go:238] Setting addon metrics-server=true in "addons-718596"
	I1018 08:30:44.740963 1276853 addons.go:238] Setting addon inspektor-gadget=true in "addons-718596"
	I1018 08:30:44.740983 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.740993 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.741440 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.741446 1276853 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-718596"
	I1018 08:30:44.741457 1276853 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-718596"
	I1018 08:30:44.741472 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.741848 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.742099 1276853 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-718596"
	I1018 08:30:44.742118 1276853 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-718596"
	I1018 08:30:44.742140 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.742531 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.745431 1276853 addons.go:69] Setting cloud-spanner=true in profile "addons-718596"
	I1018 08:30:44.745564 1276853 addons.go:238] Setting addon cloud-spanner=true in "addons-718596"
	I1018 08:30:44.745626 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.746136 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.748744 1276853 addons.go:69] Setting registry=true in profile "addons-718596"
	I1018 08:30:44.749370 1276853 addons.go:238] Setting addon registry=true in "addons-718596"
	I1018 08:30:44.749445 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.741440 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.748904 1276853 addons.go:69] Setting registry-creds=true in profile "addons-718596"
	I1018 08:30:44.740947 1276853 addons.go:238] Setting addon yakd=true in "addons-718596"
	I1018 08:30:44.748914 1276853 addons.go:69] Setting storage-provisioner=true in profile "addons-718596"
	I1018 08:30:44.748918 1276853 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-718596"
	I1018 08:30:44.748931 1276853 addons.go:69] Setting volcano=true in profile "addons-718596"
	I1018 08:30:44.748935 1276853 addons.go:69] Setting volumesnapshots=true in profile "addons-718596"
	I1018 08:30:44.749278 1276853 out.go:179] * Verifying Kubernetes components...
	I1018 08:30:44.749720 1276853 addons.go:69] Setting gcp-auth=true in profile "addons-718596"
	I1018 08:30:44.749747 1276853 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-718596"
	I1018 08:30:44.749751 1276853 addons.go:69] Setting default-storageclass=true in profile "addons-718596"
	I1018 08:30:44.749756 1276853 addons.go:69] Setting ingress=true in profile "addons-718596"
	I1018 08:30:44.749759 1276853 addons.go:69] Setting ingress-dns=true in profile "addons-718596"
	I1018 08:30:44.752171 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.766961 1276853 addons.go:238] Setting addon registry-creds=true in "addons-718596"
	I1018 08:30:44.767059 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.767551 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.767629 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.768160 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.774429 1276853 mustload.go:65] Loading cluster: addons-718596
	I1018 08:30:44.774703 1276853 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:30:44.774987 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.782679 1276853 addons.go:238] Setting addon storage-provisioner=true in "addons-718596"
	I1018 08:30:44.782739 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.783223 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.793187 1276853 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-718596"
	I1018 08:30:44.793278 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.793782 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.803586 1276853 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-718596"
	I1018 08:30:44.804027 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.808154 1276853 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-718596"
	I1018 08:30:44.808567 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.821671 1276853 addons.go:238] Setting addon volcano=true in "addons-718596"
	I1018 08:30:44.821722 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.822197 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.825596 1276853 addons.go:238] Setting addon ingress=true in "addons-718596"
	I1018 08:30:44.825650 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.826120 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.840193 1276853 addons.go:238] Setting addon volumesnapshots=true in "addons-718596"
	I1018 08:30:44.840244 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.842266 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.843617 1276853 addons.go:238] Setting addon ingress-dns=true in "addons-718596"
	I1018 08:30:44.843669 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:44.844301 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:44.904479 1276853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:30:44.908748 1276853 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 08:30:44.914957 1276853 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 08:30:44.914984 1276853 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 08:30:44.915056 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:44.958719 1276853 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 08:30:44.960010 1276853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
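	[editor note] The pipeline above rewrites the CoreDNS Corefile in flight: it splices a hosts block (mapping host.minikube.internal to the gateway) ahead of the forward plugin and a log directive ahead of errors, then replaces the ConfigMap. The touched region ends up roughly as follows (a reconstruction; other stock plugins elided):
	          log
	          errors
	          ...
	          hosts {
	             192.168.49.1 host.minikube.internal
	             fallthrough
	          }
	          forward . /etc/resolv.conf ...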
	I1018 08:30:44.960119 1276853 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 08:30:45.008710 1276853 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:45.008797 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 08:30:45.008920 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.023218 1276853 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:45.023240 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 08:30:45.023313 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.072007 1276853 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 08:30:45.075075 1276853 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 08:30:45.075109 1276853 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 08:30:45.075188 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.076700 1276853 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 08:30:45.081519 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 08:30:45.081550 1276853 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 08:30:45.081648 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.087879 1276853 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 08:30:45.092230 1276853 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:45.092265 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 08:30:45.092354 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:44.960501 1276853 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 08:30:45.099480 1276853 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:45.099510 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 08:30:45.099594 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.112834 1276853 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 08:30:45.119976 1276853 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:45.120011 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 08:30:45.120104 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.126253 1276853 addons.go:238] Setting addon default-storageclass=true in "addons-718596"
	I1018 08:30:45.126311 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:45.126380 1276853 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 08:30:45.130066 1276853 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-718596"
	I1018 08:30:45.130117 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:45.130580 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:45.139933 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:45.142054 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:45.207518 1276853 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:45.207610 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 08:30:45.211995 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 08:30:45.212132 1276853 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 08:30:45.212209 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.212730 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	W1018 08:30:45.216749 1276853 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 08:30:45.227210 1276853 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 08:30:45.232500 1276853 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 08:30:45.235827 1276853 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 08:30:45.235889 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 08:30:45.235970 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.239015 1276853 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 08:30:45.241192 1276853 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:45.242049 1276853 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:45.242083 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 08:30:45.242161 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.281551 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.245053 1276853 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:45.282811 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 08:30:45.282914 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.314441 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 08:30:45.318636 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 08:30:45.321496 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 08:30:45.328128 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 08:30:45.331105 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 08:30:45.336072 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 08:30:45.339026 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 08:30:45.343624 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.349040 1276853 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 08:30:45.352119 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 08:30:45.352148 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 08:30:45.352232 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.427970 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.451433 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.458766 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.468671 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.487331 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.510786 1276853 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:45.510807 1276853 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 08:30:45.510868 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.523342 1276853 out.go:179]   - Using image docker.io/busybox:stable
	I1018 08:30:45.532078 1276853 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 08:30:45.538886 1276853 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:45.538911 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 08:30:45.538991 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:45.549372 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.564187 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.571226 1276853 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:30:45.573247 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:45.576013 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	W1018 08:30:45.582808 1276853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:45.582844 1276853 retry.go:31] will retry after 135.481831ms: ssh: handshake failed: EOF
	W1018 08:30:45.583061 1276853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:45.583074 1276853 retry.go:31] will retry after 249.090875ms: ssh: handshake failed: EOF
	I1018 08:30:45.584076 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	W1018 08:30:45.588833 1276853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:45.588858 1276853 retry.go:31] will retry after 204.502929ms: ssh: handshake failed: EOF
	I1018 08:30:45.618613 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	W1018 08:30:45.620120 1276853 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:30:45.620143 1276853 retry.go:31] will retry after 277.161385ms: ssh: handshake failed: EOF
	I1018 08:30:45.628156 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
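The handshake EOFs above are typically transient: the SSH server inside the freshly started container may not yet be accepting connections, so sshutil retries each dial after a short delay (135-277ms in this run) until the handshake succeeds. A manual connection using the same parameters logged by sshutil would be:

	ssh -i /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa -p 34591 docker@127.0.0.1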
	I1018 08:30:45.989764 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 08:30:46.089627 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 08:30:46.089654 1276853 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 08:30:46.118691 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 08:30:46.121525 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:30:46.144776 1276853 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 08:30:46.144849 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 08:30:46.246226 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:30:46.317698 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:30:46.335605 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 08:30:46.335680 1276853 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 08:30:46.372637 1276853 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 08:30:46.372722 1276853 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 08:30:46.393155 1276853 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 08:30:46.393238 1276853 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 08:30:46.453708 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 08:30:46.453788 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 08:30:46.456620 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:30:46.476487 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 08:30:46.476561 1276853 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 08:30:46.480652 1276853 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:46.480725 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 08:30:46.503136 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:30:46.546939 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:30:46.550273 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:30:46.552202 1276853 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:46.552269 1276853 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 08:30:46.601816 1276853 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:46.601887 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 08:30:46.605127 1276853 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 08:30:46.605201 1276853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 08:30:46.606809 1276853 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:46.606874 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 08:30:46.609023 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 08:30:46.609087 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 08:30:46.610968 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:46.731309 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 08:30:46.731387 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 08:30:46.749196 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:30:46.759688 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:30:46.771815 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:30:46.785111 1276853 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 08:30:46.785199 1276853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 08:30:46.947682 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 08:30:46.947707 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 08:30:46.961750 1276853 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.00170476s)
	I1018 08:30:46.961780 1276853 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
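The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway address. Reconstructed from the sed expression (the patched Corefile itself is not captured in this log), the inserted stanza is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

placed immediately before the existing "forward . /etc/resolv.conf" directive, along with a "log" directive inserted ahead of the "errors" line.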
	I1018 08:30:46.962735 1276853 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.391485706s)
	I1018 08:30:46.963369 1276853 node_ready.go:35] waiting up to 6m0s for node "addons-718596" to be "Ready" ...
	I1018 08:30:47.027772 1276853 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 08:30:47.027805 1276853 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 08:30:47.111662 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.121822736s)
	I1018 08:30:47.193446 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 08:30:47.193473 1276853 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 08:30:47.394875 1276853 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 08:30:47.394909 1276853 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 08:30:47.445958 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 08:30:47.445980 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 08:30:47.465908 1276853 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-718596" context rescaled to 1 replicas
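The rescale logged above is performed through minikube's Kubernetes API client; a functionally equivalent kubectl invocation (a sketch, not the code path used here) would be:

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1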
	I1018 08:30:47.656095 1276853 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:47.656120 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 08:30:47.710527 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 08:30:47.710549 1276853 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 08:30:47.848567 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:47.852206 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 08:30:47.852226 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 08:30:48.003413 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 08:30:48.003520 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 08:30:48.027641 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.908908126s)
	I1018 08:30:48.027830 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.906236022s)
	I1018 08:30:48.027950 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.781643415s)
	I1018 08:30:48.028029 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.710258815s)
	I1018 08:30:48.159908 1276853 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:30:48.159990 1276853 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 08:30:48.297155 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1018 08:30:48.979835 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:49.525050 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.068348068s)
	I1018 08:30:49.592851 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.089639931s)
	I1018 08:30:50.235077 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.688046818s)
	W1018 08:30:50.980315 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:51.018769 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.468408923s)
	I1018 08:30:51.018821 1276853 addons.go:479] Verifying addon ingress=true in "addons-718596"
	I1018 08:30:51.019160 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.408128824s)
	W1018 08:30:51.019192 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:51.019209 1276853 retry.go:31] will retry after 163.875129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
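This first inspektor-gadget failure already contains the root cause of every retry that follows: ig-crd.yaml was transferred to the node as only 14 bytes (see the scp line at 08:30:45.075109), and a file that small cannot hold the apiVersion and kind fields kubectl validation requires, so reapplying the same file can never succeed. One way to confirm the diagnosis from the node, using the same kubectl binary, is a client-side dry run:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml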
	I1018 08:30:51.019315 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.270043394s)
	I1018 08:30:51.019333 1276853 addons.go:479] Verifying addon metrics-server=true in "addons-718596"
	I1018 08:30:51.019393 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.259632277s)
	I1018 08:30:51.019474 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.24757389s)
	I1018 08:30:51.019489 1276853 addons.go:479] Verifying addon registry=true in "addons-718596"
	I1018 08:30:51.019808 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.171208669s)
	W1018 08:30:51.019886 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 08:30:51.019906 1276853 retry.go:31] will retry after 128.194154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
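Unlike the ig-crd failure, this one is a CRD registration race rather than a bad manifest: the VolumeSnapshotClass object is submitted in the same apply batch that creates the volumesnapshotclasses CRD, and the API server has not yet registered the new kind. The retry below (apply --force) succeeds once the CRD is established; an alternative sequencing sketch, not what minikube does here, would gate the dependent object on CRD establishment:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml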
	I1018 08:30:51.022542 1276853 out.go:179] * Verifying registry addon...
	I1018 08:30:51.022652 1276853 out.go:179] * Verifying ingress addon...
	I1018 08:30:51.022676 1276853 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-718596 service yakd-dashboard -n yakd-dashboard
	
	I1018 08:30:51.027144 1276853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 08:30:51.027998 1276853 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 08:30:51.031520 1276853 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:30:51.031546 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.032099 1276853 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 08:30:51.032115 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:51.148280 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:30:51.184129 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:51.460075 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.162872432s)
	I1018 08:30:51.460161 1276853 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-718596"
	I1018 08:30:51.464546 1276853 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 08:30:51.468246 1276853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 08:30:51.496140 1276853 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:30:51.496162 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:51.605338 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:51.605771 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:51.975737 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.077142 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.077574 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.472449 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:52.531101 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:52.531253 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:52.750342 1276853 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 08:30:52.750425 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:52.766446 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:52.872507 1276853 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 08:30:52.884843 1276853 addons.go:238] Setting addon gcp-auth=true in "addons-718596"
	I1018 08:30:52.884891 1276853 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:30:52.885336 1276853 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:30:52.903387 1276853 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 08:30:52.903442 1276853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:30:52.928016 1276853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:30:52.971517 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.031940 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.032304 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:30:53.466158 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:53.471873 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.530905 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:53.531121 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:53.977078 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:53.978488 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.83016758s)
	I1018 08:30:53.978600 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.794439994s)
	W1018 08:30:53.978661 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:53.978684 1276853 retry.go:31] will retry after 556.924767ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:53.978617 1276853 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.075207052s)
	I1018 08:30:53.982094 1276853 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 08:30:53.984940 1276853 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:30:53.987783 1276853 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 08:30:53.987807 1276853 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 08:30:54.000739 1276853 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 08:30:54.000815 1276853 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 08:30:54.017489 1276853 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:54.017515 1276853 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 08:30:54.033455 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:30:54.034043 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.034248 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.484291 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:54.524239 1276853 addons.go:479] Verifying addon gcp-auth=true in "addons-718596"
	I1018 08:30:54.527417 1276853 out.go:179] * Verifying gcp-auth addon...
	I1018 08:30:54.530714 1276853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 08:30:54.536127 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:54.581932 1276853 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 08:30:54.581960 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:54.582224 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:54.582294 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:54.971431 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.035170 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.036731 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.037822 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:30:55.372876 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:55.372906 1276853 retry.go:31] will retry after 367.26232ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 08:30:55.466835 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:55.471397 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:55.530291 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:55.531646 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:55.533806 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:55.741198 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:55.971665 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:56.030610 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.032135 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.036804 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.471856 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:30:56.536381 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:56.536412 1276853 retry.go:31] will retry after 974.417439ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:56.537397 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:56.537694 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:56.538063 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:56.971684 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.032303 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.033006 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.034810 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:30:57.468492 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:57.470747 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:57.510962 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:57.531479 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:57.533698 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:57.534584 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:57.971358 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:58.033317 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.033329 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.035070 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:30:58.313662 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:58.313698 1276853 retry.go:31] will retry after 883.018678ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:30:58.471462 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:58.530137 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:58.531451 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:58.533149 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:58.970822 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.030956 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.031372 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.033299 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:30:59.197530 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:30:59.472460 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:30:59.530742 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:30:59.533405 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:30:59.534804 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:30:59.970558 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:30:59.974804 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:31:00.039364 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:00.039462 1276853 retry.go:31] will retry after 1.398897827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:00.045554 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.046766 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.048482 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.471517 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:00.531461 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:00.532274 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:00.533359 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:00.970921 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.030906 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.031601 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.033375 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.438611 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:01.475066 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:01.531596 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:01.533200 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:01.534487 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:01.973531 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.035527 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.035890 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.037321 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:31:02.252803 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:02.252832 1276853 retry.go:31] will retry after 1.826604097s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 08:31:02.466865 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:02.471665 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:02.530646 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:02.531416 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:02.533193 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:02.972136 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.031412 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.031582 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:03.033699 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.473174 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:03.531615 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:03.531747 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:03.533421 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:03.972074 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.031372 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.031507 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:04.033815 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.080165 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 08:31:04.467038 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:04.473361 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:04.536956 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:04.537908 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:04.538460 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:31:04.886403 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:04.886446 1276853 retry.go:31] will retry after 4.750594541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
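While the apply keeps failing, the delays chosen by retry.go grow: 883ms, 1.4s, 1.8s, now 4.75s, and 19.2s further down. That shape is the usual jittered exponential backoff. A minimal, self-contained sketch of the pattern — the constants and jitter formula here are made up for illustration, not minikube's actual tuning:

    // retry_sketch.go: re-run an action with roughly doubling, jittered
    // delays until it succeeds or a deadline passes.
    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func retryWithBackoff(run func() error, initial time.Duration, deadline time.Time) error {
    	delay := initial
    	for {
    		err := run()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("giving up: %w", err)
    		}
    		// Jittered exponential backoff: base delay doubles each round,
    		// plus a random component so retries don't synchronize.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    }

    func main() {
    	attempts := 0
    	err := retryWithBackoff(func() error {
    		attempts++
    		if attempts < 4 {
    			return errors.New("apply failed")
    		}
    		return nil
    	}, 500*time.Millisecond, time.Now().Add(30*time.Second))
    	fmt.Printf("done after %d attempts, err=%v\n", attempts, err)
    }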
	I1018 08:31:04.971515 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.030693 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.031585 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.033720 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.471345 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:05.530244 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:05.531763 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:05.533336 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:05.971270 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.030495 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.031531 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.034084 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:06.472131 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:06.530893 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:06.531784 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:06.533854 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:06.966382 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:06.971255 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.030522 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.030657 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.033948 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.472248 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:07.531029 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:07.531408 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:07.533261 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:07.971973 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.031441 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.031781 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.033256 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.471306 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:08.529956 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:08.531257 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:08.533184 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:08.971005 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.031081 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.031709 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.033938 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:09.466899 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:09.471462 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:09.530661 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:09.531668 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:09.533262 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:09.637494 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:09.972992 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:10.101684 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.102206 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.102699 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.471256 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:31:10.526557 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:10.526599 1276853 retry.go:31] will retry after 4.835119494s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:10.530309 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:10.531783 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:10.533477 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:10.971799 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.031240 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.031489 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.033085 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:11.472032 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:11.531249 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:11.531797 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:11.534036 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:11.965494 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:11.971417 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:12.030579 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.031164 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.033244 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.471145 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:12.531192 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:12.532425 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:12.533452 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:12.971810 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.031628 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.032245 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.039433 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:13.471820 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:13.531458 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:13.531493 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:13.533340 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:13.966460 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:13.971548 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.030368 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.031578 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.033957 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.470835 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:14.531141 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:14.531302 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:14.533568 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:14.971060 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.030735 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:15.033367 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.034294 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:15.362844 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:15.471229 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:15.532068 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:15.532747 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:15.537765 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:15.967527 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:15.971371 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.031296 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:16.032802 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.034043 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:16.172044 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:16.172077 1276853 retry.go:31] will retry after 7.484678622s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:16.471304 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:16.529892 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:16.531014 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:16.533057 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:16.971308 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.030452 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:17.032058 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.034046 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.471479 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:17.534958 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:17.535183 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:17.535436 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:17.971500 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.030649 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:18.031746 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.034465 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:18.466318 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:18.471333 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:18.530316 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:18.530694 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:18.534233 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:18.970886 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.030811 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.031398 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:19.033074 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.471975 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:19.530515 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:19.531319 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:19.533218 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:19.971270 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.031538 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:20.031793 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.034391 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:20.466377 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:20.470948 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:20.531034 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:20.531185 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:20.533806 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:20.972158 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.030144 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:21.030743 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.032842 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.471613 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:21.530335 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:21.530927 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:21.532945 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:21.971593 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.030373 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:22.030733 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.032815 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:22.471304 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:22.531267 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:22.531400 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:22.533692 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:22.966846 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:22.971425 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.030129 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:23.030876 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.033446 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.471698 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:23.531323 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:23.532287 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:23.533629 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:23.657893 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:23.972347 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:24.031347 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:24.032703 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.034900 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.472283 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:31:24.477722 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:24.477750 1276853 retry.go:31] will retry after 19.241906076s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:24.531284 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:24.532258 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:24.533119 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:24.971757 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.030776 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:25.032035 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:25.033478 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:31:25.467166 1276853 node_ready.go:57] node "addons-718596" has "Ready":"False" status (will retry)
	I1018 08:31:25.471785 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:25.531075 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:25.531768 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:25.533515 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:25.971638 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.031485 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:26.032269 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:26.033642 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.526200 1276853 node_ready.go:49] node "addons-718596" is "Ready"
	I1018 08:31:26.526230 1276853 node_ready.go:38] duration metric: took 39.562829828s for node "addons-718596" to be "Ready" ...
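Every addon wait above was gated on this one event: the node reported "Ready":"False" for 39.5s while the runtime and CNI came up, and node_ready.go polled its status the whole time. A hedged client-go sketch of such a poll — the node name and kubeconfig path are taken from this log, but the loop itself is illustrative, not minikube's implementation:

    // node_ready_sketch.go: poll a node until its Ready condition is "True".
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		if ok, err := nodeReady(cs, "addons-718596"); err == nil && ok {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // the log polls on a similar cadence
    	}
    }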
	I1018 08:31:26.526244 1276853 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:31:26.526300 1276853 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:31:26.567525 1276853 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:31:26.567551 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:26.570647 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:26.570976 1276853 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:31:26.570994 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
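With the node Ready, the label selectors finally match scheduled pods ("Found 3 Pods", "Found 2 Pods"), and each kapi.go wait can move from polling an empty selector to tracking real pod phases. A sketch of that kind of wait under stated assumptions — the selectors are copied from the log, but the all-namespaces lookup and polling cadence are illustrative:

    // kapi_wait_sketch.go: treat an addon as ready only when its label
    // selector matches at least one pod and every match is Running.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // allRunning reports whether the selector matches at least one pod and
    // every matched pod has reached the Running phase ("" = all namespaces).
    func allRunning(cs *kubernetes.Clientset, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods("").List(context.Background(),
    		metav1.ListOptions{LabelSelector: selector})
    	if err != nil || len(pods.Items) == 0 {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Selectors copied from the log lines above.
    	for _, sel := range []string{
    		"kubernetes.io/minikube-addons=registry",
    		"app.kubernetes.io/name=ingress-nginx",
    	} {
    		for {
    			if ok, _ := allRunning(cs, sel); ok {
    				fmt.Println(sel, "is ready")
    				break
    			}
    			time.Sleep(500 * time.Millisecond) // cadence is illustrative
    		}
    	}
    }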
	I1018 08:31:26.577257 1276853 api_server.go:72] duration metric: took 41.836784185s to wait for apiserver process to appear ...
	I1018 08:31:26.577280 1276853 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:31:26.577298 1276853 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 08:31:26.585979 1276853 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 08:31:26.586968 1276853 api_server.go:141] control plane version: v1.34.1
	I1018 08:31:26.586993 1276853 api_server.go:131] duration metric: took 9.706726ms to wait for apiserver health ...
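The healthz gate above boils down to an HTTPS GET against the apiserver expecting a 200 status and an "ok" body. A self-contained sketch; the endpoint address comes from the log, but note the TLS handling is an assumption for illustration — the real client authenticates with the kubeconfig's credentials rather than disabling certificate verification:

    // healthz_sketch.go: probe the apiserver's /healthz endpoint.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{
    			// Illustrative only: accept the apiserver's self-signed
    			// certificate so the sketch runs without a kubeconfig.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.49.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }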
	I1018 08:31:26.587006 1276853 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:31:26.610190 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:26.611483 1276853 system_pods.go:59] 19 kube-system pods found
	I1018 08:31:26.611519 1276853 system_pods.go:61] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:26.611526 1276853 system_pods.go:61] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending
	I1018 08:31:26.611532 1276853 system_pods.go:61] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending
	I1018 08:31:26.611536 1276853 system_pods.go:61] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending
	I1018 08:31:26.611542 1276853 system_pods.go:61] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:26.611550 1276853 system_pods.go:61] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:26.611582 1276853 system_pods.go:61] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:26.611591 1276853 system_pods.go:61] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:26.611598 1276853 system_pods.go:61] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:26.611606 1276853 system_pods.go:61] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:26.611611 1276853 system_pods.go:61] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:26.611619 1276853 system_pods.go:61] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:26.611623 1276853 system_pods.go:61] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending
	I1018 08:31:26.611628 1276853 system_pods.go:61] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending
	I1018 08:31:26.611634 1276853 system_pods.go:61] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:26.611638 1276853 system_pods.go:61] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending
	I1018 08:31:26.611644 1276853 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending
	I1018 08:31:26.611654 1276853 system_pods.go:61] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending
	I1018 08:31:26.611661 1276853 system_pods.go:61] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:31:26.611666 1276853 system_pods.go:74] duration metric: took 24.655369ms to wait for pod list to return data ...
	I1018 08:31:26.611680 1276853 default_sa.go:34] waiting for default service account to be created ...
	I1018 08:31:26.616295 1276853 default_sa.go:45] found service account: "default"
	I1018 08:31:26.616320 1276853 default_sa.go:55] duration metric: took 4.633615ms for default service account to be created ...
	I1018 08:31:26.616329 1276853 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 08:31:26.619946 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:26.619980 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:26.619987 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending
	I1018 08:31:26.619992 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending
	I1018 08:31:26.619996 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending
	I1018 08:31:26.620001 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:26.620006 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:26.620012 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:26.620020 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:26.620028 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:26.620037 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:26.620042 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:26.620049 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:26.620057 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending
	I1018 08:31:26.620061 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending
	I1018 08:31:26.620068 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:26.620076 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending
	I1018 08:31:26.620081 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending
	I1018 08:31:26.620085 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending
	I1018 08:31:26.620100 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:31:26.620114 1276853 retry.go:31] will retry after 230.981942ms: missing components: kube-dns
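The k8s-apps gate is stricter than simply listing pods: it retries until specific required components — here kube-dns, served by the coredns pods — have a Running pod. A sketch of that gate, assuming the standard upstream k8s-app=kube-dns label on the coredns deployment; the retry cadence mirrors the ~230ms delay in the log but is otherwise illustrative:

    // system_pods_sketch.go: block until the required kube-dns component has
    // a Running pod in kube-system.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
    			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    		if err == nil {
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					fmt.Println("kube-dns is running:", p.Name)
    					return
    				}
    			}
    		}
    		fmt.Println("missing components: kube-dns; retrying")
    		time.Sleep(300 * time.Millisecond)
    	}
    }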
	I1018 08:31:26.861934 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:26.861966 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:26.861975 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:26.861984 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:26.861993 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:26.861998 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:26.862003 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:26.862007 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:26.862011 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:26.862017 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:26.862025 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:26.862030 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:26.862044 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:26.862051 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:26.862062 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:26.862068 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:26.862076 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:26.862086 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:26.862094 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:26.862107 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:31:26.862122 1276853 retry.go:31] will retry after 387.779919ms: missing components: kube-dns
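The system_pods check above lists every pod in kube-system, treats startup as incomplete while a required component (here kube-dns, the component name behind the coredns pods) is not yet Running, and retries; the fractional durations (387.779919ms, 892.542789ms, ...) suggest a jittered backoff. A minimal client-go sketch of that loop, assuming a reachable cluster and a kubeconfig at the default path; the component-to-label mapping and the backoff window are illustrative, not minikube's actual code:

	package main

	import (
		"context"
		"fmt"
		"math/rand"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		required := []string{"kube-dns"} // component names, as in the "missing components" line
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
			if err != nil {
				panic(err)
			}
			running := map[string]bool{}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					// coredns pods carry the k8s-app=kube-dns label; that label is
					// how a component name maps onto concrete pods
					running[p.Labels["k8s-app"]] = true
				}
			}
			var missing []string
			for _, c := range required {
				if !running[c] {
					missing = append(missing, c)
				}
			}
			if len(missing) == 0 {
				fmt.Println("all required components running")
				return
			}
			// randomized backoff, echoing the fractional delays in retry.go
			delay := 300*time.Millisecond + time.Duration(rand.Int63n(int64(300*time.Millisecond)))
			fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
			time.Sleep(delay)
		}
	}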
	I1018 08:31:26.974160 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:27.074564 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:27.074750 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:27.074829 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
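The kapi.go:96 lines track a different wait: each addon is polled by label selector (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, ...) until a matching pod leaves Pending; the interleaved and occasionally out-of-order timestamps suggest one polling goroutine per addon sharing a logger. A sketch of one such wait using apimachinery's poller (client construction as in the previous sketch; the interval and timeout here are assumptions):

	package waiters

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabeledPod polls until some pod matching selector is Running.
	func waitForLabeledPod(ctx context.Context, client kubernetes.Interface, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(ctx,
					metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat API hiccups as "not ready yet" and keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
	}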
	I1018 08:31:27.254006 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:27.254039 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:27.254047 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:27.254055 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:27.254061 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:27.254065 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:27.254070 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:27.254074 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:27.254078 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:27.254083 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:27.254087 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:27.254091 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:27.254098 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:27.254105 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:27.254110 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:27.254117 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:27.254123 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:27.254129 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:27.254137 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:27.254142 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:31:27.254156 1276853 retry.go:31] will retry after 387.411248ms: missing components: kube-dns
	I1018 08:31:27.472381 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:27.533237 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:27.533832 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:27.535262 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:27.648140 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:27.648178 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:27.648187 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:27.648194 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:27.648202 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:27.648207 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:27.648212 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:27.648217 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:27.648222 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:27.648228 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:27.648232 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:27.648244 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:27.648250 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:27.648257 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:27.648269 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:27.648278 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:27.648288 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:27.648295 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:27.648307 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:27.648311 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Running
	I1018 08:31:27.648327 1276853 retry.go:31] will retry after 383.879169ms: missing components: kube-dns
	I1018 08:31:27.971710 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.072779 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:28.072999 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.075873 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:28.081909 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:28.081930 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:28.081938 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:28.081946 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:28.081951 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:28.081957 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:28.081967 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:28.081973 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:28.081980 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:28.081984 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:28.081990 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:28.081997 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:28.082006 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:28.082013 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:28.082020 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:28.082026 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:28.082032 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:28.082043 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:28.082047 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Running
	I1018 08:31:28.082064 1276853 retry.go:31] will retry after 484.318948ms: missing components: kube-dns
	I1018 08:31:28.077214 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:28.472319 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:28.533176 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:28.542820 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:28.544593 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:28.576761 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:28.576839 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:31:28.576866 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:28.576913 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:28.576943 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:28.576963 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:28.576984 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:28.577003 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:28.577034 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:28.577057 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:28.577076 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:28.577107 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:28.577129 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:28.577148 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:28.577169 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:28.577190 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:28.577225 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:28.577246 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:28.577272 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:28.577300 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Running
	I1018 08:31:28.577333 1276853 retry.go:31] will retry after 892.542789ms: missing components: kube-dns
	I1018 08:31:28.972227 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.030069 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:29.031180 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:29.032911 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.477776 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:29.478597 1276853 system_pods.go:86] 19 kube-system pods found
	I1018 08:31:29.478629 1276853 system_pods.go:89] "coredns-66bc5c9577-8nftz" [1c905c00-e667-4a38-b424-7fd3901e6887] Running
	I1018 08:31:29.478672 1276853 system_pods.go:89] "csi-hostpath-attacher-0" [c3d36d04-6867-4193-ad59-91c9da0e76e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:31:29.478689 1276853 system_pods.go:89] "csi-hostpath-resizer-0" [604115b1-8501-45a5-873c-19f7270110a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:31:29.478697 1276853 system_pods.go:89] "csi-hostpathplugin-j45m4" [982988e5-cfc0-4870-adae-1be623576f01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:31:29.478705 1276853 system_pods.go:89] "etcd-addons-718596" [1dc9a0f9-fd9a-461a-934b-39d733f0335e] Running
	I1018 08:31:29.478710 1276853 system_pods.go:89] "kindnet-nmmrr" [12711d38-a1d4-4a75-a94f-cf45b2742438] Running
	I1018 08:31:29.478715 1276853 system_pods.go:89] "kube-apiserver-addons-718596" [f9e117f2-d65a-40eb-9e43-fd072a2a3403] Running
	I1018 08:31:29.478725 1276853 system_pods.go:89] "kube-controller-manager-addons-718596" [0af36d79-5283-49d3-9c7b-4ac00143781b] Running
	I1018 08:31:29.478757 1276853 system_pods.go:89] "kube-ingress-dns-minikube" [a97622f8-15b9-43b4-a3d9-73a22a00d96f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:31:29.478772 1276853 system_pods.go:89] "kube-proxy-ssljd" [a96defad-104b-440d-aeaf-4d7cb9cb8cd1] Running
	I1018 08:31:29.478784 1276853 system_pods.go:89] "kube-scheduler-addons-718596" [e0fc581a-25ee-40be-98a8-ffa382c613cb] Running
	I1018 08:31:29.478790 1276853 system_pods.go:89] "metrics-server-85b7d694d7-qkx7f" [036adf3b-4b77-4d0a-91c5-bb194f6dd6fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:31:29.478797 1276853 system_pods.go:89] "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:31:29.478803 1276853 system_pods.go:89] "registry-6b586f9694-6wmvl" [348f7e05-5b38-49a1-93ae-2cbf48215d2e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:31:29.478809 1276853 system_pods.go:89] "registry-creds-764b6fb674-hhnk4" [5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:31:29.478832 1276853 system_pods.go:89] "registry-proxy-pvgzm" [d475388a-a4d8-45b8-b290-fd09079b4baf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:31:29.478848 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4c88f" [9e6591cd-6c7b-449a-9a2a-e223fc7e547d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:29.478856 1276853 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2jxk" [de5436c4-9655-4e4e-bd42-e60a8aa87060] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:31:29.478878 1276853 system_pods.go:89] "storage-provisioner" [ee6c3cc8-0da6-4036-8550-3f463daae2bf] Running
	I1018 08:31:29.478888 1276853 system_pods.go:126] duration metric: took 2.862551766s to wait for k8s-apps to be running ...
	I1018 08:31:29.478929 1276853 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 08:31:29.479007 1276853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:31:29.495523 1276853 system_svc.go:56] duration metric: took 16.586513ms WaitForService to wait for kubelet
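The kubelet check is a plain shell-out: systemctl is-active --quiet exits 0 only when the unit is active, so no output parsing is needed. A local sketch of the same probe (minikube runs it inside the node over its ssh_runner; running it directly on a systemd host is the assumption here):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// `systemctl is-active --quiet <unit>` prints nothing; state is
		// reported purely through the exit code (0 = active).
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				fmt.Println("kubelet is not active")
				return
			}
			panic(err) // systemctl not found, permission error, ...
		}
		fmt.Println("kubelet is active")
	}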
	I1018 08:31:29.495590 1276853 kubeadm.go:586] duration metric: took 44.755119866s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
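The map in the kubeadm.go:586 line is the set of readiness gates this run waited on; the keys match the component names accepted by minikube's --wait flag (for example, minikube start --wait=apiserver,system_pods waits on only those two), so a run configured with a narrower list would skip most of the polling above.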
	I1018 08:31:29.495624 1276853 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:31:29.498672 1276853 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 08:31:29.498737 1276853 node_conditions.go:123] node cpu capacity is 2
	I1018 08:31:29.498766 1276853 node_conditions.go:105] duration metric: took 3.120231ms to run NodePressure ...
	I1018 08:31:29.498791 1276853 start.go:241] waiting for startup goroutines ...
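The NodePressure step reads each node's reported capacity (203034800Ki of ephemeral storage and 2 CPUs here) from the API. A sketch of the same read, plus the pressure conditions a healthy node should report as False (reusing the client from the first sketch; exactly which fields minikube inspects is an assumption):

	package waiters

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodePressure dumps capacity and flags any pressure condition that is True.
	func printNodePressure(ctx context.Context, client kubernetes.Interface) error {
		nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status == corev1.ConditionTrue {
						fmt.Printf("node %s: %s is True\n", n.Name, c.Type)
					}
				}
			}
		}
		return nil
	}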
	I1018 08:31:29.532625 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:29.532977 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:29.534509 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:29.972949 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.073069 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:30.073313 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.073300 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:30.476272 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:30.533795 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:30.534218 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:30.535947 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:30.972974 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:31.035160 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:31.036077 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:31.036568 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:31.472983 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:31.535995 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:31.536578 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:31.538821 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:31.972326 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.036643 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:32.037166 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:32.041877 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:32.472658 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:32.535877 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:32.551001 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:32.551577 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:32.972831 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.030856 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:33.032723 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:33.034115 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:33.472590 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:33.531915 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:33.531916 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:33.534664 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:33.972625 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:34.033944 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:34.034064 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:34.073711 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:34.471802 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:34.532444 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:34.534130 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:34.535152 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:34.972574 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:35.034366 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:35.034793 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:35.035912 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:35.471877 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:35.531229 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:35.531334 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:35.533659 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:35.971899 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:36.033035 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:36.033345 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:36.036276 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:36.472129 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:36.531360 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:36.533871 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:36.534735 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:36.971908 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:37.072928 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:37.073252 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:37.073551 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:37.473317 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:37.530686 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:37.531407 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:37.533068 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:37.971830 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:38.030800 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:38.031448 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:38.033376 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:38.473591 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:38.532962 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:38.533417 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:38.535502 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:38.972725 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:39.033519 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:39.034767 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:39.035253 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:39.472527 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:39.533354 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:39.533732 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:39.538067 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:39.972094 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:40.034670 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:40.035231 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:40.037351 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:40.472298 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:40.532571 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:40.534011 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:40.535397 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:40.972411 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:41.031824 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:41.033070 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:41.033924 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:41.471826 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:41.531227 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:41.531371 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:41.533138 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:41.971649 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:42.032601 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:42.032765 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:42.035644 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:42.473172 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:42.533333 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:42.535365 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:42.535878 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:42.972743 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:43.031292 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:43.031752 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:43.033860 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:43.473938 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:43.531677 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:43.531949 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:43.534566 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:43.719823 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:31:43.974418 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:44.032734 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:44.034989 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:44.036687 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:44.471548 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:44.533259 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:44.534431 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:44.541004 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:44.928037 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.20815594s)
	W1018 08:31:44.928077 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:31:44.928117 1276853 retry.go:31] will retry after 22.62489028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
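This is the concrete failure behind the inspektor-gadget addon: everything in ig-deployment.yaml applies cleanly, but kubectl's client-side validation rejects ig-crd.yaml because the top-level object has no apiVersion or kind (consistent with, say, a leading empty YAML document or a stripped header), and minikube schedules a retry 22.6s out rather than failing immediately. A sketch of the pre-flight check kubectl is doing, assuming sigs.k8s.io/yaml and a single-document manifest (the inline manifest text is invented for illustration; the real file is /etc/kubernetes/addons/ig-crd.yaml):

	package main

	import (
		"fmt"

		"sigs.k8s.io/yaml"
	)

	// typeMeta mirrors exactly the two fields the validation error names.
	type typeMeta struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}

	func main() {
		manifest := []byte("metadata:\n  name: example\n") // header missing: invalid
		var tm typeMeta
		if err := yaml.Unmarshal(manifest, &tm); err != nil {
			panic(err)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Println("error validating data: [apiVersion not set, kind not set]")
		}
	}

The --validate=false suggested in the stderr would only skip this client-side check; a document with no kind still cannot be mapped to an API resource, so the retries are unlikely to succeed until the manifest itself is fixed.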
	I1018 08:31:44.972879 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:45.035957 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:45.037909 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:45.038720 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:45.480249 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:45.545289 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:45.545358 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:45.545807 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:45.973082 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:46.031206 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:46.031383 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:46.033563 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:46.472124 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:46.535912 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:46.536073 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:46.537898 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:46.973830 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:47.032655 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:47.033216 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:47.035718 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:47.473390 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:47.576816 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:47.577360 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:47.578068 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:47.972564 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:48.038085 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:48.038299 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:48.039087 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:48.471350 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:48.561316 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:48.561405 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:48.561841 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:48.972286 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:49.033236 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:49.033604 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:49.035310 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:49.472151 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:49.535089 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:49.535649 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:49.536523 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:49.972503 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:50.036065 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:50.036516 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:50.037254 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:50.472237 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:50.533503 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:50.534356 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:50.536051 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:50.972180 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:51.032029 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:51.033629 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:51.035188 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:51.471941 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:51.533795 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:51.535808 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:51.536091 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:51.971645 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:52.034928 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:52.035405 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:52.035484 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:52.477759 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:52.533717 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:52.534538 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:52.535548 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:52.973256 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:53.035033 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:53.035665 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:53.036102 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:53.471209 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:53.531234 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:53.533190 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:53.534196 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:53.971865 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:54.033875 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:54.034628 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:54.037123 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:54.472473 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:54.531199 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:54.532211 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:54.533591 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:54.973007 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:55.033258 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:55.035690 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:55.037490 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:55.472398 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:55.531046 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:55.531967 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:55.533626 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:55.971780 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:56.032077 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:56.032162 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:56.034042 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:56.471861 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:56.553619 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:56.554213 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:56.555659 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:56.973122 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:57.033954 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:57.034035 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:57.034628 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:57.473010 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:57.533030 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:57.534667 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:57.536593 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:57.972263 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:58.032298 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:58.032431 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:58.034396 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:58.471726 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:58.535563 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:58.536406 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:58.537072 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:58.971881 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:59.073055 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:59.073199 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:59.073310 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:59.472346 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:31:59.530110 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:31:59.531508 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:31:59.533307 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:31:59.971907 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:00.035147 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:00.037448 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:00.037793 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:00.472534 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:00.533180 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:00.533496 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:00.534758 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:00.972445 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:01.072670 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:01.072792 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:01.073526 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:01.471979 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:01.532607 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:01.532771 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:01.534512 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:01.973642 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:02.035808 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:02.036787 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:02.037287 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:02.472018 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:02.531384 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:02.531481 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:02.533225 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:02.971945 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:03.031514 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:03.032417 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:03.044174 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:03.471487 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:03.531053 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:03.531225 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:03.533889 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:03.972436 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:04.031240 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:04.033146 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:04.034970 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:04.472252 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:04.531263 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:04.532206 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:04.533413 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:04.973050 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:05.033644 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:05.034141 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:05.038020 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:05.471739 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:05.535241 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:32:05.535657 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:05.538127 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:05.972245 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:06.032366 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:06.034390 1276853 kapi.go:107] duration metric: took 1m15.007247767s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 08:32:06.037492 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:06.472575 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:06.532524 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:06.535505 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:06.973309 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:07.032715 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:07.033363 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:07.472336 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:07.531623 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:07.533547 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:07.553744 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:32:07.974988 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:08.032403 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:08.034147 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:08.472028 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:08.531590 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:08.533391 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:08.716882 1276853 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.163049236s)
	W1018 08:32:08.716970 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:32:08.717003 1276853 retry.go:31] will retry after 31.278700369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:32:08.971775 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:09.032614 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:09.034438 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:09.471905 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:09.530961 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:09.533278 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:09.971486 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:10.038694 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:10.039404 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:10.472279 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:10.531466 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:10.534039 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:10.972493 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:11.073441 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:11.073900 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:11.472870 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:11.532069 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:11.533985 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:11.971375 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:12.033065 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:12.034101 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:12.472255 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:12.531578 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:12.533991 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:12.972050 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:13.032601 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:13.034015 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:13.472255 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:13.532522 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:13.534049 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:13.972773 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:14.031737 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:14.034243 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:14.472447 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:14.538878 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:14.540423 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:14.972447 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:15.052505 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:15.054314 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:15.472947 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:15.531745 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:15.533853 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:15.972871 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:16.034580 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:16.036176 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:16.472109 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:16.531408 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:16.533659 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:16.972451 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:17.034376 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:17.034661 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:17.472599 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:17.531655 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:17.534410 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:17.972547 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:18.033340 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:18.035463 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:18.472358 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:18.531673 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:18.533514 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:32:18.975987 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:19.075190 1276853 kapi.go:107] duration metric: took 1m24.544472091s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 08:32:19.075657 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:19.078257 1276853 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-718596 cluster.
	I1018 08:32:19.081236 1276853 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 08:32:19.084301 1276853 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 08:32:19.472733 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:19.531518 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:19.971667 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:20.031779 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:20.478065 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:20.531053 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:20.972069 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:21.036843 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:21.472290 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:21.531428 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:21.971774 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:22.033396 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:22.472027 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:22.535965 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:22.973667 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:23.036451 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:23.476391 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:23.531442 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:23.972443 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:24.032020 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:24.475576 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:24.531259 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:24.986547 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:25.073352 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:25.472250 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:25.531362 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:25.972217 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:26.031777 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:26.471956 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:26.531190 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:26.972463 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:27.031591 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:27.472332 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:27.531202 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:27.974297 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:28.032544 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:28.471413 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:28.531352 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:28.975702 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:29.075944 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:29.476258 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:29.534132 1276853 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:32:29.972634 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:30.034842 1276853 kapi.go:107] duration metric: took 1m39.006839949s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 08:32:30.472286 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:30.972556 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:31.472527 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:31.972669 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:32.504910 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:32.971776 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:33.472216 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:33.971685 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:34.471822 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:34.972518 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:35.472691 1276853 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:32:35.973458 1276853 kapi.go:107] duration metric: took 1m44.505211733s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 08:32:39.997765 1276853 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 08:32:40.863164 1276853 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 08:32:40.863262 1276853 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 08:32:40.866355 1276853 out.go:179] * Enabled addons: default-storageclass, cloud-spanner, nvidia-device-plugin, amd-gpu-device-plugin, registry-creds, storage-provisioner, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1018 08:32:40.869408 1276853 addons.go:514] duration metric: took 1m56.128538199s for enable addons: enabled=[default-storageclass cloud-spanner nvidia-device-plugin amd-gpu-device-plugin registry-creds storage-provisioner ingress-dns storage-provisioner-rancher metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1018 08:32:40.869462 1276853 start.go:246] waiting for cluster config update ...
	I1018 08:32:40.869488 1276853 start.go:255] writing updated cluster config ...
	I1018 08:32:40.869786 1276853 ssh_runner.go:195] Run: rm -f paused
	I1018 08:32:40.873432 1276853 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:32:40.877961 1276853 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8nftz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.886213 1276853 pod_ready.go:94] pod "coredns-66bc5c9577-8nftz" is "Ready"
	I1018 08:32:40.886244 1276853 pod_ready.go:86] duration metric: took 8.247591ms for pod "coredns-66bc5c9577-8nftz" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.888749 1276853 pod_ready.go:83] waiting for pod "etcd-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.893644 1276853 pod_ready.go:94] pod "etcd-addons-718596" is "Ready"
	I1018 08:32:40.893676 1276853 pod_ready.go:86] duration metric: took 4.899517ms for pod "etcd-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.897098 1276853 pod_ready.go:83] waiting for pod "kube-apiserver-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.901857 1276853 pod_ready.go:94] pod "kube-apiserver-addons-718596" is "Ready"
	I1018 08:32:40.901882 1276853 pod_ready.go:86] duration metric: took 4.75662ms for pod "kube-apiserver-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:40.904273 1276853 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:41.277891 1276853 pod_ready.go:94] pod "kube-controller-manager-addons-718596" is "Ready"
	I1018 08:32:41.277921 1276853 pod_ready.go:86] duration metric: took 373.580209ms for pod "kube-controller-manager-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:41.477375 1276853 pod_ready.go:83] waiting for pod "kube-proxy-ssljd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:41.877619 1276853 pod_ready.go:94] pod "kube-proxy-ssljd" is "Ready"
	I1018 08:32:41.877652 1276853 pod_ready.go:86] duration metric: took 400.240693ms for pod "kube-proxy-ssljd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:42.079529 1276853 pod_ready.go:83] waiting for pod "kube-scheduler-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:42.477252 1276853 pod_ready.go:94] pod "kube-scheduler-addons-718596" is "Ready"
	I1018 08:32:42.477295 1276853 pod_ready.go:86] duration metric: took 397.734515ms for pod "kube-scheduler-addons-718596" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:32:42.477307 1276853 pod_ready.go:40] duration metric: took 1.603843279s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:32:42.530327 1276853 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 08:32:42.533364 1276853 out.go:179] * Done! kubectl is now configured to use "addons-718596" cluster and "default" namespace by default
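The inspektor-gadget failure recorded above is a client-side validation error: at least one YAML document inside /etc/kubernetes/addons/ig-crd.yaml is missing its apiVersion and kind fields, so the apply and every retry fail identically. The stderr itself names the blunt escape hatch; a sketch of the same apply with that flag added — it only skips validation, it does not repair the manifest:

    # The exact command the addon manager ran, plus the --validate=false
    # flag the error message itself suggests. Skipping client-side
    # validation is a workaround, not a fix for the malformed manifest.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml \
      -f /etc/kubernetes/addons/ig-deployment.yaml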
	
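The gcp-auth notes in the log above describe an opt-out label for pods that should not receive the credential mount. A minimal sketch of such a pod — the pod name is hypothetical, the image is one pulled elsewhere in this run, and the label value is an assumption (the log only names the key):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"   # key named by the log; value assumed
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
    EOF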
	
	==> CRI-O <==
	Oct 18 08:32:39 addons-718596 crio[827]: time="2025-10-18T08:32:39.01904747Z" level=info msg="Stopped pod sandbox (already stopped): 077bf7525067ca853093b6b7ad10f1dd098e6756cdf431b9254b6c4f6110471f" id=d71a7b11-2393-4fea-80a9-0764ced90bfb name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 08:32:39 addons-718596 crio[827]: time="2025-10-18T08:32:39.019483874Z" level=info msg="Removing pod sandbox: 077bf7525067ca853093b6b7ad10f1dd098e6756cdf431b9254b6c4f6110471f" id=390504d3-e855-461e-9f49-fffe838d7d97 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:32:39 addons-718596 crio[827]: time="2025-10-18T08:32:39.024425031Z" level=info msg="Removed pod sandbox: 077bf7525067ca853093b6b7ad10f1dd098e6756cdf431b9254b6c4f6110471f" id=390504d3-e855-461e-9f49-fffe838d7d97 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.553404035Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fe5c0b6a-f3fe-4c7d-bc45-7cca64b53af1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.55347888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.564463566Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6bb6f44141a8178d1da7c88ab7e82186257ca60519761abf9cd1d7bd9ad0e150 UID:0d911a7b-137f-4786-84c4-787c87e49cd2 NetNS:/var/run/netns/e8db460a-22b6-4924-9744-be58e9a5a3da Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001888360}] Aliases:map[]}"
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.564504Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.574573878Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6bb6f44141a8178d1da7c88ab7e82186257ca60519761abf9cd1d7bd9ad0e150 UID:0d911a7b-137f-4786-84c4-787c87e49cd2 NetNS:/var/run/netns/e8db460a-22b6-4924-9744-be58e9a5a3da Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001888360}] Aliases:map[]}"
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.574719278Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.577484704Z" level=info msg="Ran pod sandbox 6bb6f44141a8178d1da7c88ab7e82186257ca60519761abf9cd1d7bd9ad0e150 with infra container: default/busybox/POD" id=fe5c0b6a-f3fe-4c7d-bc45-7cca64b53af1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.581423073Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2f0a749e-bf64-4ef9-9b0e-a4fae51efc14 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.581837659Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2f0a749e-bf64-4ef9-9b0e-a4fae51efc14 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.58198186Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2f0a749e-bf64-4ef9-9b0e-a4fae51efc14 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.583199626Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6f1ca504-df14-434e-b95a-f646a3793a11 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:32:43 addons-718596 crio[827]: time="2025-10-18T08:32:43.585403155Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.565462633Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6f1ca504-df14-434e-b95a-f646a3793a11 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.566117754Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a7eafae7-21d5-4953-8dcd-a80ebac679c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.569070253Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1cf15b2e-38f0-466c-9f91-cbd187fa74d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.578188804Z" level=info msg="Creating container: default/busybox/busybox" id=c870d932-6d2d-4f29-82e4-7513849d1fbd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.579336213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.586277581Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.586917909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.605644796Z" level=info msg="Created container aeb26a57fbbe386e9b4a67c1deaed4be59686e8520e2b16e771537a2948f0d7f: default/busybox/busybox" id=c870d932-6d2d-4f29-82e4-7513849d1fbd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.606594925Z" level=info msg="Starting container: aeb26a57fbbe386e9b4a67c1deaed4be59686e8520e2b16e771537a2948f0d7f" id=3ffcd056-934d-4dd3-907e-6f65cdf6ce9d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 08:32:45 addons-718596 crio[827]: time="2025-10-18T08:32:45.60830684Z" level=info msg="Started container" PID=5096 containerID=aeb26a57fbbe386e9b4a67c1deaed4be59686e8520e2b16e771537a2948f0d7f description=default/busybox/busybox id=3ffcd056-934d-4dd3-907e-6f65cdf6ce9d name=/runtime.v1.RuntimeService/StartContainer sandboxID=6bb6f44141a8178d1da7c88ab7e82186257ca60519761abf9cd1d7bd9ad0e150
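The container-status table below is the node's CRI-level view, roughly what crictl reports. For anyone reproducing this inspection by hand, a sketch assuming CRI-O's stock socket path:

    # List all containers and pod sandboxes via the CRI, as the table
    # below does; the endpoint is the default CRI-O socket.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods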
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	aeb26a57fbbe3       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   6bb6f44141a81       busybox                                     default
	f2ba69481cca4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          18 seconds ago       Running             csi-snapshotter                          0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	fee77718765ce       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          19 seconds ago       Running             csi-provisioner                          0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	2b38f5de44735       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            21 seconds ago       Running             liveness-probe                           0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	60f3656a31e33       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           22 seconds ago       Running             hostpath                                 0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	69416bfe918f8       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                23 seconds ago       Running             node-driver-registrar                    0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	974724db9b42c       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             24 seconds ago       Running             controller                               0                   5f4e85036be20       ingress-nginx-controller-675c5ddd98-jnjlc   ingress-nginx
	155e540f1af62       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            31 seconds ago       Running             gadget                                   0                   b76d452a09331       gadget-bht4v                                gadget
	94be5aa873ec7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 35 seconds ago       Running             gcp-auth                                 0                   fb0d3d4091691       gcp-auth-78565c9fb4-ftmb2                   gcp-auth
	dd33128f289f9       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             38 seconds ago       Running             local-path-provisioner                   0                   302fabf263ae0       local-path-provisioner-648f6765c9-jb247     local-path-storage
	28a70a60eb5fb       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               39 seconds ago       Running             minikube-ingress-dns                     0                   b59f4d16c35d6       kube-ingress-dns-minikube                   kube-system
	9b6001d8d045b       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              48 seconds ago       Running             registry-proxy                           0                   3da4b786d6a80       registry-proxy-pvgzm                        kube-system
	20dde0b8d894a       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   51 seconds ago       Running             csi-external-health-monitor-controller   0                   703b4beb211c7       csi-hostpathplugin-j45m4                    kube-system
	1a2a1784a32ed       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              52 seconds ago       Running             csi-resizer                              0                   b7eacbf9d1753       csi-hostpath-resizer-0                      kube-system
	41255395d4ccf       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     54 seconds ago       Running             nvidia-device-plugin-ctr                 0                   795b96639631e       nvidia-device-plugin-daemonset-clntn        kube-system
	823da2535019f       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             55 seconds ago       Exited              patch                                    2                   39d26ec8d909d       ingress-nginx-admission-patch-mt9m7         ingress-nginx
	df452f4c1f840       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   7d36a5a11c987       registry-6b586f9694-6wmvl                   kube-system
	b7afa4a4426cb       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   402d8f3a03b7c       snapshot-controller-7d9fbc56b8-4c88f        kube-system
	f56bc5e257a76       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   552e06887980e       ingress-nginx-admission-create-vfgl2        ingress-nginx
	73bba38c93d1d       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   06741663bf9b7       csi-hostpath-attacher-0                     kube-system
	8ed328ea6d0f5       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   cee1fb1be16af       yakd-dashboard-5ff678cb9-8568g              yakd-dashboard
	3341cb4941ef2       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   f76f7becd7b64       snapshot-controller-7d9fbc56b8-m2jxk        kube-system
	919fd269f9a19       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   f34b145e4a2f1       cloud-spanner-emulator-86bd5cbb97-8gkdk     default
	af3c01fc17b1c       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   c2daf4aff6ee1       metrics-server-85b7d694d7-qkx7f             kube-system
	21a4115e68c1d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   7c7eca5deb916       coredns-66bc5c9577-8nftz                    kube-system
	8a8b73b00b16e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   7182518dde626       storage-provisioner                         kube-system
	c13caa4e33e4b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   8ca5ffc543117       kindnet-nmmrr                               kube-system
	c8f4c76b52ea3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   6a043050176b9       kube-proxy-ssljd                            kube-system
	d3d6b4b5a780c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   01c9e6e6c9ac7       kube-apiserver-addons-718596                kube-system
	b5b1a3ea57732       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   192c7a0ffac0d       kube-scheduler-addons-718596                kube-system
	3d8f28771c74b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   8d0728e96919c       etcd-addons-718596                          kube-system
	c60014395decc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   843a6ca8c651e       kube-controller-manager-addons-718596       kube-system
	
	
	==> coredns [21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50] <==
	[INFO] 10.244.0.16:52896 - 22482 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000067871s
	[INFO] 10.244.0.16:52896 - 52011 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001920684s
	[INFO] 10.244.0.16:52896 - 46349 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002055835s
	[INFO] 10.244.0.16:52896 - 55768 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000098122s
	[INFO] 10.244.0.16:52896 - 4261 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000108403s
	[INFO] 10.244.0.16:37267 - 59236 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000146088s
	[INFO] 10.244.0.16:37267 - 59047 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000206838s
	[INFO] 10.244.0.16:42280 - 9151 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012515s
	[INFO] 10.244.0.16:42280 - 8723 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000159224s
	[INFO] 10.244.0.16:40332 - 27910 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00014183s
	[INFO] 10.244.0.16:40332 - 27688 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000233085s
	[INFO] 10.244.0.16:52570 - 56577 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001157415s
	[INFO] 10.244.0.16:52570 - 56380 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001163626s
	[INFO] 10.244.0.16:51849 - 27299 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000121359s
	[INFO] 10.244.0.16:51849 - 27487 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000132362s
	[INFO] 10.244.0.19:37279 - 15211 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000197952s
	[INFO] 10.244.0.19:54658 - 50539 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000120932s
	[INFO] 10.244.0.19:40844 - 46512 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001427s
	[INFO] 10.244.0.19:33349 - 38586 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000107714s
	[INFO] 10.244.0.19:46138 - 46502 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000214436s
	[INFO] 10.244.0.19:54137 - 18290 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000103218s
	[INFO] 10.244.0.19:37917 - 6226 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001876359s
	[INFO] 10.244.0.19:40127 - 58434 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002284306s
	[INFO] 10.244.0.19:58183 - 4327 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002294571s
	[INFO] 10.244.0.19:48371 - 42778 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001728596s
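	
	The NXDOMAIN/NOERROR pairs above are ordinary Kubernetes DNS search-path expansion, not lookup failures: with the default ndots:5, a name with fewer than five dots such as registry.kube-system.svc.cluster.local is tried against each search suffix first (hence the NXDOMAIN answers for ...cluster.local.cluster.local and ...us-east-2.compute.internal) before the bare name returns NOERROR. A minimal sketch of the pod resolv.conf that would produce exactly this sequence; the nameserver IP is an assumption (10.96.0.10 is the usual kube-dns ClusterIP):
	
	    # hypothetical /etc/resolv.conf for a kube-system pod (assumed values)
	    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    nameserver 10.96.0.10
	    options ndots:5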
	
	
	==> describe nodes <==
	Name:               addons-718596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-718596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=addons-718596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_30_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-718596
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-718596"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:30:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-718596
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 08:32:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 08:32:51 +0000   Sat, 18 Oct 2025 08:30:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 08:32:51 +0000   Sat, 18 Oct 2025 08:30:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 08:32:51 +0000   Sat, 18 Oct 2025 08:30:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 08:32:51 +0000   Sat, 18 Oct 2025 08:31:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-718596
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                dc9321bd-7d08-4a3c-9dd2-b8eede71a99c
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-86bd5cbb97-8gkdk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  gadget                      gadget-bht4v                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gcp-auth                    gcp-auth-78565c9fb4-ftmb2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-jnjlc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m3s
	  kube-system                 coredns-66bc5c9577-8nftz                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m9s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpathplugin-j45m4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 etcd-addons-718596                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m14s
	  kube-system                 kindnet-nmmrr                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m10s
	  kube-system                 kube-apiserver-addons-718596                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-addons-718596        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-ssljd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-addons-718596                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 metrics-server-85b7d694d7-qkx7f              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m4s
	  kube-system                 nvidia-device-plugin-daemonset-clntn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 registry-6b586f9694-6wmvl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 registry-creds-764b6fb674-hhnk4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 registry-proxy-pvgzm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 snapshot-controller-7d9fbc56b8-4c88f         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 snapshot-controller-7d9fbc56b8-m2jxk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  local-path-storage          local-path-provisioner-648f6765c9-jb247      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-8568g               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m7s   kube-proxy       
	  Normal   Starting                 2m15s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m15s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m14s  kubelet          Node addons-718596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m14s  kubelet          Node addons-718596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m14s  kubelet          Node addons-718596 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m10s  node-controller  Node addons-718596 event: Registered Node addons-718596 in Controller
	  Normal   NodeReady                87s    kubelet          Node addons-718596 status is now: NodeReady
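	
	The percentages in the Allocated resources block above are plain ratios against the node's Allocatable figures, rounded down; a worked check:
	
	    cpu requests:    1050m / 2000m                      = 0.525 -> 52%
	    cpu limits:       100m / 2000m                      = 0.050 ->  5%
	    memory requests: 638Mi = 653312Ki; 653312 / 8022304 ≈ 0.081 ->  8%
	    memory limits:   476Mi = 487424Ki; 487424 / 8022304 ≈ 0.061 ->  6%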
	
	
	==> dmesg <==
	[ +30.749123] overlayfs: idmapped layers are currently not supported
	[Oct18 08:05] overlayfs: idmapped layers are currently not supported
	[Oct18 08:06] overlayfs: idmapped layers are currently not supported
	[Oct18 08:08] overlayfs: idmapped layers are currently not supported
	[Oct18 08:09] overlayfs: idmapped layers are currently not supported
	[Oct18 08:10] overlayfs: idmapped layers are currently not supported
	[ +38.212735] overlayfs: idmapped layers are currently not supported
	[Oct18 08:11] overlayfs: idmapped layers are currently not supported
	[Oct18 08:12] overlayfs: idmapped layers are currently not supported
	[Oct18 08:13] overlayfs: idmapped layers are currently not supported
	[  +7.848314] overlayfs: idmapped layers are currently not supported
	[Oct18 08:14] overlayfs: idmapped layers are currently not supported
	[Oct18 08:15] overlayfs: idmapped layers are currently not supported
	[Oct18 08:16] overlayfs: idmapped layers are currently not supported
	[ +29.066776] overlayfs: idmapped layers are currently not supported
	[Oct18 08:17] overlayfs: idmapped layers are currently not supported
	[Oct18 08:18] overlayfs: idmapped layers are currently not supported
	[  +0.898927] overlayfs: idmapped layers are currently not supported
	[Oct18 08:20] overlayfs: idmapped layers are currently not supported
	[  +5.259921] overlayfs: idmapped layers are currently not supported
	[Oct18 08:22] overlayfs: idmapped layers are currently not supported
	[  +6.764143] overlayfs: idmapped layers are currently not supported
	[Oct18 08:24] overlayfs: idmapped layers are currently not supported
	[Oct18 08:29] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 08:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8] <==
	{"level":"warn","ts":"2025-10-18T08:30:35.090997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.118270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.121638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.143914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.161159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.172987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.192303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.204186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.221869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.237345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.256215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.271320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.292732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.321656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.330454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.378266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.384827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.402819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:35.494482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:51.604219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:30:51.623632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:31:13.171059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:31:13.186742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:31:13.230524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:31:13.253755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40464","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [94be5aa873ec727ccddb1ea1b2875bc26b001adeb30bb66792f8fa88896103df] <==
	2025/10/18 08:32:18 GCP Auth Webhook started!
	2025/10/18 08:32:42 Ready to marshal response ...
	2025/10/18 08:32:42 Ready to write response ...
	2025/10/18 08:32:43 Ready to marshal response ...
	2025/10/18 08:32:43 Ready to write response ...
	2025/10/18 08:32:43 Ready to marshal response ...
	2025/10/18 08:32:43 Ready to write response ...
	
	
	==> kernel <==
	 08:32:53 up 10:15,  0 user,  load average: 2.08, 1.91, 2.42
	Linux addons-718596 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787] <==
	E1018 08:31:15.617574       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 08:31:15.617687       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 08:31:15.617759       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 08:31:15.617847       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 08:31:17.117471       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 08:31:17.117574       1 metrics.go:72] Registering metrics
	I1018 08:31:17.117668       1 controller.go:711] "Syncing nftables rules"
	I1018 08:31:25.617993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:31:25.618051       1 main.go:301] handling current node
	I1018 08:31:35.616442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:31:35.616478       1 main.go:301] handling current node
	I1018 08:31:45.616739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:31:45.616768       1 main.go:301] handling current node
	I1018 08:31:55.616350       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:31:55.616382       1 main.go:301] handling current node
	I1018 08:32:05.617038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:32:05.617069       1 main.go:301] handling current node
	I1018 08:32:15.616850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:32:15.616882       1 main.go:301] handling current node
	I1018 08:32:25.616180       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:32:25.616215       1 main.go:301] handling current node
	I1018 08:32:35.616810       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:32:35.616859       1 main.go:301] handling current node
	I1018 08:32:45.616470       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:32:45.616496       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca] <==
	W1018 08:30:51.618172       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1018 08:30:54.393948       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.110.55.23"}
	W1018 08:31:13.170846       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 08:31:13.185403       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 08:31:13.229617       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1018 08:31:13.246097       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1018 08:31:26.150028       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.55.23:443: connect: connection refused
	E1018 08:31:26.150152       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.55.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:31:26.170229       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.55.23:443: connect: connection refused
	E1018 08:31:26.170407       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.55.23:443: connect: connection refused" logger="UnhandledError"
	W1018 08:31:26.218675       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.55.23:443: connect: connection refused
	E1018 08:31:26.218810       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.55.23:443: connect: connection refused" logger="UnhandledError"
	E1018 08:31:32.593365       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.184.249:443: connect: connection refused" logger="UnhandledError"
	W1018 08:31:32.593973       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 08:31:32.594027       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 08:31:32.595647       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.184.249:443: connect: connection refused" logger="UnhandledError"
	E1018 08:31:32.600383       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.184.249:443: connect: connection refused" logger="UnhandledError"
	E1018 08:31:32.621490       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.184.249:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.184.249:443: connect: connection refused" logger="UnhandledError"
	I1018 08:31:32.798500       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 08:32:51.526208       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56388: use of closed network connection
	E1018 08:32:51.758072       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56404: use of closed network connection
	E1018 08:32:51.893100       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56424: use of closed network connection
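	
	"Failed calling webhook, failing open gcp-auth-mutate.k8s.io" means the mutating webhook was unreachable (the gcp-auth pod was not yet serving at 08:31:26) but admission proceeded anyway, which is the behavior of a webhook registered with failurePolicy: Ignore. A hypothetical fragment of such a registration, with the service name, namespace, and path taken from the URL in the log and everything else assumed:
	
	    apiVersion: admissionregistration.k8s.io/v1
	    kind: MutatingWebhookConfiguration
	    webhooks:
	      - name: gcp-auth-mutate.k8s.io
	        failurePolicy: Ignore            # unreachable webhook => request admitted unchanged ("failing open")
	        sideEffects: None
	        admissionReviewVersions: ["v1"]
	        clientConfig:
	          service:
	            name: gcp-auth
	            namespace: gcp-auth
	            path: /mutate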
	
	
	==> kube-controller-manager [c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2] <==
	I1018 08:30:43.165330       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 08:30:43.165405       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-718596"
	I1018 08:30:43.165448       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 08:30:43.173317       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 08:30:43.181022       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 08:30:43.183639       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:30:43.189299       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 08:30:43.199947       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 08:30:43.202136       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 08:30:43.202437       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 08:30:43.202598       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 08:30:43.203626       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 08:30:43.203782       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 08:30:43.203798       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 08:30:43.203807       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 08:30:43.207913       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E1018 08:30:49.707356       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1018 08:31:13.164181       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 08:31:13.164343       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 08:31:13.164402       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 08:31:13.215003       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 08:31:13.221478       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 08:31:13.265371       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:31:13.322468       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:31:28.173028       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5] <==
	I1018 08:30:45.799966       1 server_linux.go:53] "Using iptables proxy"
	I1018 08:30:45.894131       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:30:45.994999       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:30:45.995047       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:30:45.995144       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:30:46.043364       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:30:46.046826       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:30:46.052100       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:30:46.052395       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:30:46.052409       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:30:46.057815       1 config.go:200] "Starting service config controller"
	I1018 08:30:46.057833       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:30:46.065844       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:30:46.065865       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:30:46.065884       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:30:46.065888       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:30:46.073515       1 config.go:309] "Starting node config controller"
	I1018 08:30:46.076566       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:30:46.076586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:30:46.166657       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:30:46.166697       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 08:30:46.166728       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
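	
	The "Kube-proxy configuration may be incomplete or incorrect" warning above is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. The remedy the message itself suggests is to restrict them to the node's primary IP, either via the named flag (--nodeport-addresses primary) or the equivalent KubeProxyConfiguration field; a hypothetical fragment, assuming this kube-proxy version supports the "primary" keyword:
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    nodePortAddresses: ["primary"]   # accept NodePort traffic only on the node's primary IP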
	
	
	==> kube-scheduler [b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8] <==
	E1018 08:30:36.326150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:30:36.326272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:30:36.326358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 08:30:36.326521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:30:36.331295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:36.331350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:30:36.331404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:30:36.331475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 08:30:36.331525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:30:36.331574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:30:36.331621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:30:36.331702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:30:36.331752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:30:36.331820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:30:36.331902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:30:36.332005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:30:36.332069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:30:37.111506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 08:30:37.225068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 08:30:37.230414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:30:37.322951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:30:37.362449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:30:37.383983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:30:37.453836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1018 08:30:39.596337       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
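	
	The burst of "Failed to watch ... is forbidden" errors at 08:30:36-37 is the usual control-plane bootstrap race: the scheduler starts its informers before the apiserver has finished installing the default RBAC policy, and the errors stop once the system:kube-scheduler bindings exist, hence the closing "Caches are synced" line a couple of seconds later. A quick hypothetical check on a running cluster (not part of this test run):
	
	    kubectl get clusterrolebinding system:kube-scheduler -o wide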
	
	
	==> kubelet <==
	Oct 18 08:31:59 addons-718596 kubelet[1288]: I1018 08:31:59.787303    1288 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7zhdb\" (UniqueName: \"kubernetes.io/projected/b3593095-8d2b-4a52-9a5b-97ba1d23f792-kube-api-access-7zhdb\") on node \"addons-718596\" DevicePath \"\""
	Oct 18 08:32:00 addons-718596 kubelet[1288]: I1018 08:32:00.593819    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-clntn" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:32:00 addons-718596 kubelet[1288]: I1018 08:32:00.594524    1288 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d26ec8d909dd3b7ff1e9f5019abb9fc5573070e4d77b974ed99b2ade1eeade"
	Oct 18 08:32:05 addons-718596 kubelet[1288]: I1018 08:32:05.628195    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pvgzm" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:32:05 addons-718596 kubelet[1288]: I1018 08:32:05.642333    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=40.76602868 podStartE2EDuration="1m14.642306231s" podCreationTimestamp="2025-10-18 08:30:51 +0000 UTC" firstStartedPulling="2025-10-18 08:31:27.059007075 +0000 UTC m=+48.220605195" lastFinishedPulling="2025-10-18 08:32:00.935284618 +0000 UTC m=+82.096882746" observedRunningTime="2025-10-18 08:32:01.633543803 +0000 UTC m=+82.795141923" watchObservedRunningTime="2025-10-18 08:32:05.642306231 +0000 UTC m=+86.803904351"
	Oct 18 08:32:06 addons-718596 kubelet[1288]: I1018 08:32:06.637937    1288 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-pvgzm" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 08:32:10 addons-718596 kubelet[1288]: I1018 08:32:10.079685    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-pvgzm" podStartSLOduration=5.707804114 podStartE2EDuration="44.079666244s" podCreationTimestamp="2025-10-18 08:31:26 +0000 UTC" firstStartedPulling="2025-10-18 08:31:27.110182982 +0000 UTC m=+48.271781102" lastFinishedPulling="2025-10-18 08:32:05.482045096 +0000 UTC m=+86.643643232" observedRunningTime="2025-10-18 08:32:05.642969753 +0000 UTC m=+86.804567881" watchObservedRunningTime="2025-10-18 08:32:10.079666244 +0000 UTC m=+91.241264364"
	Oct 18 08:32:10 addons-718596 kubelet[1288]: I1018 08:32:10.964659    1288 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747d9463-8949-4a51-83d7-eaa4f3a1a0b8" path="/var/lib/kubelet/pods/747d9463-8949-4a51-83d7-eaa4f3a1a0b8/volumes"
	Oct 18 08:32:14 addons-718596 kubelet[1288]: I1018 08:32:14.698069    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-ingress-dns-minikube" podStartSLOduration=40.411287175 podStartE2EDuration="1m25.698048469s" podCreationTimestamp="2025-10-18 08:30:49 +0000 UTC" firstStartedPulling="2025-10-18 08:31:28.559354382 +0000 UTC m=+49.720952502" lastFinishedPulling="2025-10-18 08:32:13.846115676 +0000 UTC m=+95.007713796" observedRunningTime="2025-10-18 08:32:14.69475595 +0000 UTC m=+95.856354078" watchObservedRunningTime="2025-10-18 08:32:14.698048469 +0000 UTC m=+95.859646589"
	Oct 18 08:32:16 addons-718596 kubelet[1288]: I1018 08:32:16.024906    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="local-path-storage/local-path-provisioner-648f6765c9-jb247" podStartSLOduration=39.596417347 podStartE2EDuration="1m26.024884423s" podCreationTimestamp="2025-10-18 08:30:50 +0000 UTC" firstStartedPulling="2025-10-18 08:31:28.611021804 +0000 UTC m=+49.772619924" lastFinishedPulling="2025-10-18 08:32:15.03948888 +0000 UTC m=+96.201087000" observedRunningTime="2025-10-18 08:32:15.700605143 +0000 UTC m=+96.862203271" watchObservedRunningTime="2025-10-18 08:32:16.024884423 +0000 UTC m=+97.186482543"
	Oct 18 08:32:16 addons-718596 kubelet[1288]: I1018 08:32:16.965455    1288 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f587054b-f318-4251-b82e-76cd01716d5a" path="/var/lib/kubelet/pods/f587054b-f318-4251-b82e-76cd01716d5a/volumes"
	Oct 18 08:32:22 addons-718596 kubelet[1288]: I1018 08:32:22.749212    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-ftmb2" podStartSLOduration=53.291350983 podStartE2EDuration="1m28.749065146s" podCreationTimestamp="2025-10-18 08:30:54 +0000 UTC" firstStartedPulling="2025-10-18 08:31:42.718235462 +0000 UTC m=+63.879833581" lastFinishedPulling="2025-10-18 08:32:18.175949608 +0000 UTC m=+99.337547744" observedRunningTime="2025-10-18 08:32:18.721393568 +0000 UTC m=+99.882991697" watchObservedRunningTime="2025-10-18 08:32:22.749065146 +0000 UTC m=+103.910663274"
	Oct 18 08:32:24 addons-718596 kubelet[1288]: I1018 08:32:24.697882    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-bht4v" podStartSLOduration=70.420956853 podStartE2EDuration="1m34.697860956s" podCreationTimestamp="2025-10-18 08:30:50 +0000 UTC" firstStartedPulling="2025-10-18 08:31:57.636176013 +0000 UTC m=+78.797774141" lastFinishedPulling="2025-10-18 08:32:21.913080124 +0000 UTC m=+103.074678244" observedRunningTime="2025-10-18 08:32:22.750310562 +0000 UTC m=+103.911908682" watchObservedRunningTime="2025-10-18 08:32:24.697860956 +0000 UTC m=+105.859459084"
	Oct 18 08:32:30 addons-718596 kubelet[1288]: E1018 08:32:30.105885    1288 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 18 08:32:30 addons-718596 kubelet[1288]: E1018 08:32:30.105984    1288 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6-gcr-creds podName:5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6 nodeName:}" failed. No retries permitted until 2025-10-18 08:33:34.105964378 +0000 UTC m=+175.267562506 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6-gcr-creds") pod "registry-creds-764b6fb674-hhnk4" (UID: "5eef93e4-a2d3-4b3d-a3fc-46aaa1ecd9b6") : secret "registry-creds-gcr" not found
	Oct 18 08:32:32 addons-718596 kubelet[1288]: I1018 08:32:32.200565    1288 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 18 08:32:32 addons-718596 kubelet[1288]: I1018 08:32:32.200620    1288 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 18 08:32:35 addons-718596 kubelet[1288]: I1018 08:32:35.826285    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-jnjlc" podStartSLOduration=76.269948763 podStartE2EDuration="1m45.826266071s" podCreationTimestamp="2025-10-18 08:30:50 +0000 UTC" firstStartedPulling="2025-10-18 08:31:59.295872831 +0000 UTC m=+80.457470959" lastFinishedPulling="2025-10-18 08:32:28.852190139 +0000 UTC m=+110.013788267" observedRunningTime="2025-10-18 08:32:29.785542926 +0000 UTC m=+110.947141054" watchObservedRunningTime="2025-10-18 08:32:35.826266071 +0000 UTC m=+116.987864190"
	Oct 18 08:32:38 addons-718596 kubelet[1288]: I1018 08:32:38.985775    1288 scope.go:117] "RemoveContainer" containerID="21d55c6335a4267e057d7361e86fbcbeb7acf14236faa7d8a22c9f2384abb73c"
	Oct 18 08:32:38 addons-718596 kubelet[1288]: I1018 08:32:38.996782    1288 scope.go:117] "RemoveContainer" containerID="ba15553f165187f03fb7bf9a147fb18f120f63c5be51d4d5a2eaebcc54be3474"
	Oct 18 08:32:39 addons-718596 kubelet[1288]: E1018 08:32:39.121156    1288 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/00ae8ba9173f852c52bc1c8d48252a0510ef311f3fabcc85c60904a083873f0a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/00ae8ba9173f852c52bc1c8d48252a0510ef311f3fabcc85c60904a083873f0a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-patch-mt9m7_b3593095-8d2b-4a52-9a5b-97ba1d23f792/patch/1.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-mt9m7_b3593095-8d2b-4a52-9a5b-97ba1d23f792/patch/1.log: no such file or directory
	Oct 18 08:32:39 addons-718596 kubelet[1288]: E1018 08:32:39.133884    1288 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2a1d993877b7b756910148ccb595b1775c809e5f19d1cf6aa6023a960bdf5e6b/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2a1d993877b7b756910148ccb595b1775c809e5f19d1cf6aa6023a960bdf5e6b/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-x2lxw_f587054b-f318-4251-b82e-76cd01716d5a/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-x2lxw_f587054b-f318-4251-b82e-76cd01716d5a/patch/1.log: no such file or directory
	Oct 18 08:32:40 addons-718596 kubelet[1288]: I1018 08:32:40.799787    1288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-j45m4" podStartSLOduration=6.483255978 podStartE2EDuration="1m14.799769841s" podCreationTimestamp="2025-10-18 08:31:26 +0000 UTC" firstStartedPulling="2025-10-18 08:31:27.059064665 +0000 UTC m=+48.220662785" lastFinishedPulling="2025-10-18 08:32:35.375578528 +0000 UTC m=+116.537176648" observedRunningTime="2025-10-18 08:32:35.827646203 +0000 UTC m=+116.989244331" watchObservedRunningTime="2025-10-18 08:32:40.799769841 +0000 UTC m=+121.961367969"
	Oct 18 08:32:43 addons-718596 kubelet[1288]: I1018 08:32:43.323720    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvbzm\" (UniqueName: \"kubernetes.io/projected/0d911a7b-137f-4786-84c4-787c87e49cd2-kube-api-access-hvbzm\") pod \"busybox\" (UID: \"0d911a7b-137f-4786-84c4-787c87e49cd2\") " pod="default/busybox"
	Oct 18 08:32:43 addons-718596 kubelet[1288]: I1018 08:32:43.324438    1288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0d911a7b-137f-4786-84c4-787c87e49cd2-gcp-creds\") pod \"busybox\" (UID: \"0d911a7b-137f-4786-84c4-787c87e49cd2\") " pod="default/busybox"
	
	
	==> storage-provisioner [8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7] <==
	W1018 08:32:29.157672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:31.161102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:31.168073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:33.172152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:33.178987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:35.181953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:35.186689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:37.190484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:37.194939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:39.198065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:39.203003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:41.205994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:41.210877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:43.215831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:43.224664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:45.230596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:45.248953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:47.253091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:47.260943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:49.264063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:49.271256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:51.274808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:51.279217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:53.283126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:32:53.288425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
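Note: the storage-provisioner warnings above are noise, not failures. Since Kubernetes v1.33 the API server attaches a deprecation warning to every read or write of a v1 Endpoints object, and the provisioner still touches one every couple of seconds, presumably for its leader-election lock. A quick way to compare the two resources on this cluster (a sketch using the context name from this report):

	kubectl --context addons-718596 get endpoints -n kube-system
	kubectl --context addons-718596 get endpointslices.discovery.k8s.io -n kube-system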
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-718596 -n addons-718596
helpers_test.go:269: (dbg) Run:  kubectl --context addons-718596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-vfgl2 ingress-nginx-admission-patch-mt9m7 registry-creds-764b6fb674-hhnk4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-718596 describe pod ingress-nginx-admission-create-vfgl2 ingress-nginx-admission-patch-mt9m7 registry-creds-764b6fb674-hhnk4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-718596 describe pod ingress-nginx-admission-create-vfgl2 ingress-nginx-admission-patch-mt9m7 registry-creds-764b6fb674-hhnk4: exit status 1 (87.818999ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vfgl2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-mt9m7" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-hhnk4" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-718596 describe pod ingress-nginx-admission-create-vfgl2 ingress-nginx-admission-patch-mt9m7 registry-creds-764b6fb674-hhnk4: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable headlamp --alsologtostderr -v=1: exit status 11 (270.944975ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:32:55.200906 1283561 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:32:55.202530 1283561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:55.202589 1283561 out.go:374] Setting ErrFile to fd 2...
	I1018 08:32:55.202610 1283561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:32:55.203027 1283561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:32:55.203417 1283561 mustload.go:65] Loading cluster: addons-718596
	I1018 08:32:55.204000 1283561 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:55.204058 1283561 addons.go:606] checking whether the cluster is paused
	I1018 08:32:55.204322 1283561 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:32:55.204369 1283561 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:32:55.204861 1283561 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:32:55.228663 1283561 ssh_runner.go:195] Run: systemctl --version
	I1018 08:32:55.228720 1283561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:32:55.247133 1283561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:32:55.350783 1283561 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:32:55.350884 1283561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:32:55.379381 1283561 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:32:55.379400 1283561 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:32:55.379406 1283561 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:32:55.379410 1283561 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:32:55.379413 1283561 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:32:55.379417 1283561 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:32:55.379422 1283561 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:32:55.379431 1283561 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:32:55.379435 1283561 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:32:55.379441 1283561 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:32:55.379444 1283561 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:32:55.379447 1283561 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:32:55.379450 1283561 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:32:55.379453 1283561 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:32:55.379456 1283561 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:32:55.379461 1283561 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:32:55.379464 1283561 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:32:55.379467 1283561 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:32:55.379471 1283561 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:32:55.379473 1283561 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:32:55.379478 1283561 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:32:55.379481 1283561 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:32:55.379484 1283561 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:32:55.379487 1283561 cri.go:89] found id: ""
	I1018 08:32:55.379537 1283561 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:32:55.394500 1283561 out.go:203] 
	W1018 08:32:55.397511 1283561 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:32:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:32:55.397536 1283561 out.go:285] * 
	* 
	W1018 08:32:55.406767 1283561 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:32:55.409891 1283561 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.25s)
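Note: every MK_ADDON_DISABLE_PAUSED failure in this report follows the trace above. Before disabling an addon, minikube checks whether the cluster is paused: it lists kube-system containers through crictl (which succeeds, as the "found id" lines show) and then shells out to `sudo runc list -f json`, which aborts with "open /run/runc: no such file or directory". On this crio node the runc state directory was never created, presumably because crio is driving a different OCI runtime (crun keeps its state under /run/crun by default; that path is an assumption here). A minimal sketch for confirming this from `minikube ssh`:

	# fails exactly as in the trace above
	sudo runc list -f json
	# succeeds: crictl queries crio directly instead of the runtime state dir
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# check which runtime state root actually exists (/run/crun is assumed)
	ls -d /run/crun /run/runc

The same exit status 11 recurs below for CloudSpanner, LocalPath, NvidiaDevicePlugin, and Yakd.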

TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-8gkdk" [23889536-4c18-4aea-9b42-d1253c747fae] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0036792s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (256.722107ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:34:20.679749 1285563 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:34:20.681210 1285563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:20.681256 1285563 out.go:374] Setting ErrFile to fd 2...
	I1018 08:34:20.681275 1285563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:20.681600 1285563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:34:20.681945 1285563 mustload.go:65] Loading cluster: addons-718596
	I1018 08:34:20.682391 1285563 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:20.682428 1285563 addons.go:606] checking whether the cluster is paused
	I1018 08:34:20.682574 1285563 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:20.682609 1285563 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:34:20.683079 1285563 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:34:20.700032 1285563 ssh_runner.go:195] Run: systemctl --version
	I1018 08:34:20.700090 1285563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:34:20.717514 1285563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:34:20.818115 1285563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:34:20.818208 1285563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:34:20.852717 1285563 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:34:20.852742 1285563 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:34:20.852748 1285563 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:34:20.852752 1285563 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:34:20.852756 1285563 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:34:20.852760 1285563 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:34:20.852763 1285563 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:34:20.852767 1285563 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:34:20.852771 1285563 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:34:20.852780 1285563 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:34:20.852783 1285563 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:34:20.852787 1285563 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:34:20.852790 1285563 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:34:20.852794 1285563 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:34:20.852797 1285563 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:34:20.852805 1285563 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:34:20.852812 1285563 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:34:20.852820 1285563 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:34:20.852823 1285563 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:34:20.852827 1285563 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:34:20.852832 1285563 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:34:20.852837 1285563 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:34:20.852841 1285563 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:34:20.852844 1285563 cri.go:89] found id: ""
	I1018 08:34:20.852897 1285563 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:34:20.869424 1285563 out.go:203] 
	W1018 08:34:20.872359 1285563 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:34:20.872380 1285563 out.go:285] * 
	* 
	W1018 08:34:20.881709 1285563 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:34:20.884593 1285563 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)

TestAddons/parallel/LocalPath (8.39s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-718596 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-718596 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-718596 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [d1e1b5a0-1403-4251-a8c5-e667f7703d20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [d1e1b5a0-1403-4251-a8c5-e667f7703d20] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [d1e1b5a0-1403-4251-a8c5-e667f7703d20] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003492323s
addons_test.go:967: (dbg) Run:  kubectl --context addons-718596 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 ssh "cat /opt/local-path-provisioner/pvc-de02ca27-1646-44a3-877b-65be44ed9287_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-718596 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-718596 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (267.837367ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:34:15.402001 1285457 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:34:15.403332 1285457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:15.403344 1285457 out.go:374] Setting ErrFile to fd 2...
	I1018 08:34:15.403350 1285457 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:15.403625 1285457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:34:15.403951 1285457 mustload.go:65] Loading cluster: addons-718596
	I1018 08:34:15.404315 1285457 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:15.404333 1285457 addons.go:606] checking whether the cluster is paused
	I1018 08:34:15.404435 1285457 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:15.404452 1285457 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:34:15.404884 1285457 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:34:15.421745 1285457 ssh_runner.go:195] Run: systemctl --version
	I1018 08:34:15.421801 1285457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:34:15.439159 1285457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:34:15.542028 1285457 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:34:15.542126 1285457 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:34:15.580136 1285457 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:34:15.580160 1285457 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:34:15.580165 1285457 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:34:15.580173 1285457 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:34:15.580178 1285457 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:34:15.580181 1285457 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:34:15.580184 1285457 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:34:15.580187 1285457 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:34:15.580191 1285457 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:34:15.580197 1285457 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:34:15.580201 1285457 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:34:15.580204 1285457 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:34:15.580207 1285457 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:34:15.580210 1285457 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:34:15.580217 1285457 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:34:15.580225 1285457 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:34:15.580229 1285457 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:34:15.580233 1285457 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:34:15.580236 1285457 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:34:15.580238 1285457 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:34:15.580243 1285457 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:34:15.580250 1285457 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:34:15.580253 1285457 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:34:15.580256 1285457 cri.go:89] found id: ""
	I1018 08:34:15.580310 1285457 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:34:15.595345 1285457 out.go:203] 
	W1018 08:34:15.598804 1285457 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:34:15.598833 1285457 out.go:285] * 
	* 
	W1018 08:34:15.607703 1285457 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:34:15.612005 1285457 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.39s)

TestAddons/parallel/NvidiaDevicePlugin (6.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-clntn" [5a62626d-1b58-4367-8529-c9f2ba9bae45] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003996833s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (265.125454ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:34:07.015912 1285153 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:34:07.017559 1285153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:07.017577 1285153 out.go:374] Setting ErrFile to fd 2...
	I1018 08:34:07.017584 1285153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:07.017928 1285153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:34:07.019047 1285153 mustload.go:65] Loading cluster: addons-718596
	I1018 08:34:07.019453 1285153 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:07.019471 1285153 addons.go:606] checking whether the cluster is paused
	I1018 08:34:07.019576 1285153 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:07.019595 1285153 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:34:07.020094 1285153 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:34:07.037605 1285153 ssh_runner.go:195] Run: systemctl --version
	I1018 08:34:07.037662 1285153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:34:07.054891 1285153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:34:07.158354 1285153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:34:07.158442 1285153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:34:07.193337 1285153 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:34:07.193365 1285153 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:34:07.193371 1285153 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:34:07.193374 1285153 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:34:07.193378 1285153 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:34:07.193381 1285153 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:34:07.193384 1285153 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:34:07.193388 1285153 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:34:07.193390 1285153 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:34:07.193397 1285153 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:34:07.193401 1285153 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:34:07.193404 1285153 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:34:07.193407 1285153 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:34:07.193410 1285153 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:34:07.193414 1285153 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:34:07.193422 1285153 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:34:07.193425 1285153 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:34:07.193430 1285153 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:34:07.193433 1285153 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:34:07.193435 1285153 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:34:07.193440 1285153 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:34:07.193450 1285153 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:34:07.193454 1285153 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:34:07.193456 1285153 cri.go:89] found id: ""
	I1018 08:34:07.193508 1285153 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:34:07.208216 1285153 out.go:203] 
	W1018 08:34:07.210938 1285153 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:34:07.210960 1285153 out.go:285] * 
	* 
	W1018 08:34:07.219950 1285153 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:34:07.222748 1285153 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.27s)

TestAddons/parallel/Yakd (6.28s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-8568g" [64998f89-c9cf-48a9-9fab-941cc1c549fc] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003447352s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-718596 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-718596 addons disable yakd --alsologtostderr -v=1: exit status 11 (278.80189ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 08:34:00.729111 1285094 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:34:00.730552 1285094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:00.730598 1285094 out.go:374] Setting ErrFile to fd 2...
	I1018 08:34:00.730619 1285094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:34:00.730899 1285094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:34:00.731258 1285094 mustload.go:65] Loading cluster: addons-718596
	I1018 08:34:00.731695 1285094 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:00.731740 1285094 addons.go:606] checking whether the cluster is paused
	I1018 08:34:00.731912 1285094 config.go:182] Loaded profile config "addons-718596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:34:00.731957 1285094 host.go:66] Checking if "addons-718596" exists ...
	I1018 08:34:00.732449 1285094 cli_runner.go:164] Run: docker container inspect addons-718596 --format={{.State.Status}}
	I1018 08:34:00.749827 1285094 ssh_runner.go:195] Run: systemctl --version
	I1018 08:34:00.749878 1285094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-718596
	I1018 08:34:00.768062 1285094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34591 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/addons-718596/id_rsa Username:docker}
	I1018 08:34:00.870504 1285094 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:34:00.870608 1285094 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:34:00.912206 1285094 cri.go:89] found id: "f2ba69481cca4d0f5a25c5beaad63e76454385180b30477c08c5dd39ec11f960"
	I1018 08:34:00.912226 1285094 cri.go:89] found id: "fee77718765ce01862d3826830d1cd1b63df4294ae5007413d147c5b6f353493"
	I1018 08:34:00.912230 1285094 cri.go:89] found id: "2b38f5de44735a72df017a6ba528430e174db28f03821d00d635d4b6bf461eb9"
	I1018 08:34:00.912235 1285094 cri.go:89] found id: "60f3656a31e3365232ea3a50fda1e9925de06d7f546e566ed63caa3b6ddc6a26"
	I1018 08:34:00.912238 1285094 cri.go:89] found id: "69416bfe918f8c0355a6e848e7c20cc2a0644f592e4af4ce241f48fa38460e38"
	I1018 08:34:00.912244 1285094 cri.go:89] found id: "28a70a60eb5fbd7784b4dc8b229decc07e1ead52bd5c852f7c9fdb71b87298e6"
	I1018 08:34:00.912252 1285094 cri.go:89] found id: "9b6001d8d045befc23b48c768a68764d88dbf272340c086878656541cf475eae"
	I1018 08:34:00.912256 1285094 cri.go:89] found id: "20dde0b8d894a22ef9350e1d77e8944af9e5892634e205a207e3b8f888c608b1"
	I1018 08:34:00.912259 1285094 cri.go:89] found id: "1a2a1784a32edcacf12954ced37433e7d2a873d907689eb4a650272d4227e914"
	I1018 08:34:00.912265 1285094 cri.go:89] found id: "41255395d4ccf30b4f83f7286e24fddc9877529056e3b6550176417a18c6ec37"
	I1018 08:34:00.912269 1285094 cri.go:89] found id: "df452f4c1f840a0419fcac5bf2f09de27afcb9908345e644113dc8d6933a8ea1"
	I1018 08:34:00.912272 1285094 cri.go:89] found id: "b7afa4a4426cb6b1ac0da51b0491d588ab2af4410fd28370bf8fa39172e5f813"
	I1018 08:34:00.912276 1285094 cri.go:89] found id: "73bba38c93d1dd848863605a0e040b6cc93f6c2ae44da4b521ffd096fc0e7cef"
	I1018 08:34:00.912280 1285094 cri.go:89] found id: "3341cb4941ef2d1b3b62f6350adf989a06c0a11fdca10b1d2a4d0be1a73e4dac"
	I1018 08:34:00.912283 1285094 cri.go:89] found id: "af3c01fc17b1cf65bfc660209278c4a4b5768989098968a12a530d54cf2e3e99"
	I1018 08:34:00.912288 1285094 cri.go:89] found id: "21a4115e68c1dc6a33076f076e3d38d44c1bd958e69b4c3ac7531a57eb042d50"
	I1018 08:34:00.912295 1285094 cri.go:89] found id: "8a8b73b00b16ea37452d2de32d3703276b55fbacd22ec17ae9cc096aec490df7"
	I1018 08:34:00.912299 1285094 cri.go:89] found id: "c13caa4e33e4b6e47382b70365473a8c332ce93c8e5eeb30bd76b72c8a77d787"
	I1018 08:34:00.912302 1285094 cri.go:89] found id: "c8f4c76b52ea3dbdf50b9feeeef868087ebe9f8d3a8596d5c5b2c3f5fe5faad5"
	I1018 08:34:00.912305 1285094 cri.go:89] found id: "d3d6b4b5a780c831e4df944a01c79fc26b915752ab82f96bfa50a8db6eeb10ca"
	I1018 08:34:00.912310 1285094 cri.go:89] found id: "b5b1a3ea57732699826bd43ca767e85f48ee04112206faf1a4a459e8820cf3d8"
	I1018 08:34:00.912314 1285094 cri.go:89] found id: "3d8f28771c74beb97c58d5f8c30b538ff9c212922906d7c5984e69d87dfde8c8"
	I1018 08:34:00.912317 1285094 cri.go:89] found id: "c60014395decc7e3a08e95d661ec9973575293fea6d721129b4f610cd9045de2"
	I1018 08:34:00.912320 1285094 cri.go:89] found id: ""
	I1018 08:34:00.912373 1285094 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 08:34:00.934737 1285094 out.go:203] 
	W1018 08:34:00.937592 1285094 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:34:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 08:34:00.937622 1285094 out.go:285] * 
	* 
	W1018 08:34:00.946518 1285094 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 08:34:00.950983 1285094 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-718596 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.28s)

TestFunctional/parallel/ServiceCmdConnect (603.57s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-441731 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-441731 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-xzmmq" [34c3e15d-3754-4e22-b341-024f3dc1c356] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-441731 -n functional-441731
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 08:49:49.237793732 +0000 UTC m=+1215.003207908
functional_test.go:1645: (dbg) Run:  kubectl --context functional-441731 describe po hello-node-connect-7d85dfc575-xzmmq -n default
functional_test.go:1645: (dbg) kubectl --context functional-441731 describe po hello-node-connect-7d85dfc575-xzmmq -n default:
Name:             hello-node-connect-7d85dfc575-xzmmq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-441731/192.168.49.2
Start Time:       Sat, 18 Oct 2025 08:39:48 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:
    Image:          kicbase/echo-server
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvdlf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-kvdlf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xzmmq to functional-441731
  Warning  Failed     7m17s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m17s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m59s (x19 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m47s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Normal   Pulling    4m32s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-441731 logs hello-node-connect-7d85dfc575-xzmmq -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-441731 logs hello-node-connect-7d85dfc575-xzmmq -n default: exit status 1 (98.027063ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xzmmq" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-441731 logs hello-node-connect-7d85dfc575-xzmmq -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-441731 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-xzmmq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-441731/192.168.49.2
Start Time:       Sat, 18 Oct 2025 08:39:48 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvdlf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kvdlf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xzmmq to functional-441731
Warning  Failed     7m17s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m17s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x19 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m47s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Normal   Pulling    4m32s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-441731 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-441731 logs -l app=hello-node-connect: exit status 1 (86.552404ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-xzmmq" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-441731 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-441731 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.28.216
IPs:                      10.100.28.216
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30674/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
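The empty Endpoints field is the downstream symptom of the same failure: the service selects app=hello-node-connect, the only matching pod is not Ready, so nothing is published behind NodePort 30674 and any connect attempt has no backend. A quick check sketch, using the object names from this run:

	# Confirm the service has no ready backends and see why.
	kubectl --context functional-441731 get endpoints hello-node-connect
	kubectl --context functional-441731 get pods -l app=hello-node-connect -o wide

Once the image pull succeeds and the pod reports Ready, the endpoint list should populate and the NodePort becomes reachable.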
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-441731
helpers_test.go:243: (dbg) docker inspect functional-441731:

-- stdout --
	[
	    {
	        "Id": "64a8977aee1bd4043e00638213eea14ee80989c563489a2ef616b60dfb7d3dab",
	        "Created": "2025-10-18T08:37:01.405328674Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1291889,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T08:37:01.483813202Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/64a8977aee1bd4043e00638213eea14ee80989c563489a2ef616b60dfb7d3dab/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/64a8977aee1bd4043e00638213eea14ee80989c563489a2ef616b60dfb7d3dab/hostname",
	        "HostsPath": "/var/lib/docker/containers/64a8977aee1bd4043e00638213eea14ee80989c563489a2ef616b60dfb7d3dab/hosts",
	        "LogPath": "/var/lib/docker/containers/64a8977aee1bd4043e00638213eea14ee80989c563489a2ef616b60dfb7d3dab/64a8977aee1bd4043e00638213eea14ee80989c563489a2ef616b60dfb7d3dab-json.log",
	        "Name": "/functional-441731",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-441731:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-441731",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "64a8977aee1bd4043e00638213eea14ee80989c563489a2ef616b60dfb7d3dab",
	                "LowerDir": "/var/lib/docker/overlay2/3b63a9917fe4c4d397e5ca8dd3dcb2ac1398ce7d3a3fecc8d9edb2c4a7ef5d92-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3b63a9917fe4c4d397e5ca8dd3dcb2ac1398ce7d3a3fecc8d9edb2c4a7ef5d92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3b63a9917fe4c4d397e5ca8dd3dcb2ac1398ce7d3a3fecc8d9edb2c4a7ef5d92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3b63a9917fe4c4d397e5ca8dd3dcb2ac1398ce7d3a3fecc8d9edb2c4a7ef5d92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-441731",
	                "Source": "/var/lib/docker/volumes/functional-441731/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-441731",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-441731",
	                "name.minikube.sigs.k8s.io": "functional-441731",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "959421a0fb543faf872b1c60e695af9d3125a6e4a2135b4de76f5f8cc7599d76",
	            "SandboxKey": "/var/run/docker/netns/959421a0fb54",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34601"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34602"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34605"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34603"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34604"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-441731": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:f4:8b:6d:29:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1b4e31a983510c4f5fb30b628457699b884cd6dcfee8f87f07763588d370f98e",
	                    "EndpointID": "c8211d393a3af25c251796bfef0b9220171f86fc793f0201e74b0573045ec826",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-441731",
	                        "64a8977aee1b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
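The inspect output shows the node container itself is healthy: it is running, attached to the functional-441731 network at 192.168.49.2, and has the expected host port bindings, so the failure is confined to image resolution inside the node rather than to the machine or its networking. A diagnostic sketch, assuming the profile name from this run, reproduces the error directly against cri-o:

	# Reproduce the ambiguous short-name error from inside the node.
	minikube -p functional-441731 ssh -- sudo crictl pull kicbase/echo-server:latest
	# A fully-qualified pull should succeed if registry access is otherwise fine.
	minikube -p functional-441731 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest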
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-441731 -n functional-441731
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 logs -n 25: (1.538817062s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ ssh     │ functional-441731 ssh sudo crictl images                                                                 │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ ssh     │ functional-441731 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ ssh     │ functional-441731 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │                     │
	│ cache   │ functional-441731 cache reload                                                                           │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ ssh     │ functional-441731 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ kubectl │ functional-441731 kubectl -- --context functional-441731 get pods                                        │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:38 UTC │
	│ start   │ -p functional-441731 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:38 UTC │ 18 Oct 25 08:39 UTC │
	│ service │ invalid-svc -p functional-441731                                                                         │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │                     │
	│ config  │ functional-441731 config unset cpus                                                                      │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │ 18 Oct 25 08:39 UTC │
	│ ssh     │ functional-441731 ssh echo hello                                                                         │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │ 18 Oct 25 08:39 UTC │
	│ config  │ functional-441731 config get cpus                                                                        │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │                     │
	│ config  │ functional-441731 config set cpus 2                                                                      │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │ 18 Oct 25 08:39 UTC │
	│ config  │ functional-441731 config get cpus                                                                        │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │ 18 Oct 25 08:39 UTC │
	│ config  │ functional-441731 config unset cpus                                                                      │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │ 18 Oct 25 08:39 UTC │
	│ ssh     │ functional-441731 ssh cat /etc/hostname                                                                  │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │ 18 Oct 25 08:39 UTC │
	│ config  │ functional-441731 config get cpus                                                                        │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │                     │
	│ tunnel  │ functional-441731 tunnel --alsologtostderr                                                               │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │                     │
	│ tunnel  │ functional-441731 tunnel --alsologtostderr                                                               │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │                     │
	│ tunnel  │ functional-441731 tunnel --alsologtostderr                                                               │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │                     │
	│ addons  │ functional-441731 addons list                                                                            │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │ 18 Oct 25 08:39 UTC │
	│ addons  │ functional-441731 addons list -o json                                                                    │ functional-441731 │ jenkins │ v1.37.0 │ 18 Oct 25 08:39 UTC │ 18 Oct 25 08:39 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:38:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:38:48.226266 1296085 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:38:48.226425 1296085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:38:48.226430 1296085 out.go:374] Setting ErrFile to fd 2...
	I1018 08:38:48.226434 1296085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:38:48.226687 1296085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:38:48.227041 1296085 out.go:368] Setting JSON to false
	I1018 08:38:48.227998 1296085 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37276,"bootTime":1760739453,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 08:38:48.228055 1296085 start.go:141] virtualization:  
	I1018 08:38:48.231557 1296085 out.go:179] * [functional-441731] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 08:38:48.235265 1296085 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:38:48.235373 1296085 notify.go:220] Checking for updates...
	I1018 08:38:48.241143 1296085 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:38:48.244194 1296085 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:38:48.247107 1296085 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 08:38:48.250041 1296085 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 08:38:48.252878 1296085 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:38:48.256307 1296085 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:38:48.256398 1296085 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:38:48.279078 1296085 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 08:38:48.279184 1296085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:38:48.345667 1296085 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-18 08:38:48.33537121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:38:48.345759 1296085 docker.go:318] overlay module found
	I1018 08:38:48.350595 1296085 out.go:179] * Using the docker driver based on existing profile
	I1018 08:38:48.353396 1296085 start.go:305] selected driver: docker
	I1018 08:38:48.353405 1296085 start.go:925] validating driver "docker" against &{Name:functional-441731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-441731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:38:48.353494 1296085 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:38:48.353597 1296085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:38:48.409904 1296085 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-18 08:38:48.400213564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:38:48.410313 1296085 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:38:48.410337 1296085 cni.go:84] Creating CNI manager for ""
	I1018 08:38:48.410394 1296085 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:38:48.410441 1296085 start.go:349] cluster config:
	{Name:functional-441731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-441731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:38:48.415414 1296085 out.go:179] * Starting "functional-441731" primary control-plane node in "functional-441731" cluster
	I1018 08:38:48.418295 1296085 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:38:48.421056 1296085 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:38:48.423969 1296085 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:38:48.424022 1296085 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 08:38:48.424031 1296085 cache.go:58] Caching tarball of preloaded images
	I1018 08:38:48.424040 1296085 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:38:48.424121 1296085 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 08:38:48.424129 1296085 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:38:48.424233 1296085 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/config.json ...
	I1018 08:38:48.442284 1296085 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 08:38:48.442296 1296085 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 08:38:48.442317 1296085 cache.go:232] Successfully downloaded all kic artifacts
	I1018 08:38:48.442339 1296085 start.go:360] acquireMachinesLock for functional-441731: {Name:mk565531198f9cdfdd8de2140611b2213191f53f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:38:48.442403 1296085 start.go:364] duration metric: took 47.974µs to acquireMachinesLock for "functional-441731"
	I1018 08:38:48.442422 1296085 start.go:96] Skipping create...Using existing machine configuration
	I1018 08:38:48.442426 1296085 fix.go:54] fixHost starting: 
	I1018 08:38:48.442683 1296085 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
	I1018 08:38:48.458915 1296085 fix.go:112] recreateIfNeeded on functional-441731: state=Running err=<nil>
	W1018 08:38:48.458934 1296085 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 08:38:48.462126 1296085 out.go:252] * Updating the running docker "functional-441731" container ...
	I1018 08:38:48.462149 1296085 machine.go:93] provisionDockerMachine start ...
	I1018 08:38:48.462241 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:38:48.481726 1296085 main.go:141] libmachine: Using SSH client type: native
	I1018 08:38:48.482036 1296085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34601 <nil> <nil>}
	I1018 08:38:48.482043 1296085 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 08:38:48.628494 1296085 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-441731
	
	I1018 08:38:48.628515 1296085 ubuntu.go:182] provisioning hostname "functional-441731"
	I1018 08:38:48.628574 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:38:48.645930 1296085 main.go:141] libmachine: Using SSH client type: native
	I1018 08:38:48.646223 1296085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34601 <nil> <nil>}
	I1018 08:38:48.646232 1296085 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-441731 && echo "functional-441731" | sudo tee /etc/hostname
	I1018 08:38:48.804756 1296085 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-441731
	
	I1018 08:38:48.804836 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:38:48.823706 1296085 main.go:141] libmachine: Using SSH client type: native
	I1018 08:38:48.824030 1296085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34601 <nil> <nil>}
	I1018 08:38:48.824044 1296085 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-441731' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-441731/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-441731' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 08:38:48.972093 1296085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:38:48.972109 1296085 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 08:38:48.972144 1296085 ubuntu.go:190] setting up certificates
	I1018 08:38:48.972153 1296085 provision.go:84] configureAuth start
	I1018 08:38:48.972209 1296085 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-441731
	I1018 08:38:48.990821 1296085 provision.go:143] copyHostCerts
	I1018 08:38:48.990878 1296085 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 08:38:48.990894 1296085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 08:38:48.990969 1296085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 08:38:48.991073 1296085 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 08:38:48.991077 1296085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 08:38:48.991102 1296085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 08:38:48.991159 1296085 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 08:38:48.991162 1296085 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 08:38:48.991191 1296085 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 08:38:48.991243 1296085 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.functional-441731 san=[127.0.0.1 192.168.49.2 functional-441731 localhost minikube]
	I1018 08:38:49.119735 1296085 provision.go:177] copyRemoteCerts
	I1018 08:38:49.119800 1296085 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 08:38:49.119860 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:38:49.136737 1296085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
	I1018 08:38:49.239412 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 08:38:49.256724 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 08:38:49.275119 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 08:38:49.292752 1296085 provision.go:87] duration metric: took 320.586777ms to configureAuth
	I1018 08:38:49.292769 1296085 ubuntu.go:206] setting minikube options for container-runtime
	I1018 08:38:49.292994 1296085 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:38:49.293099 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:38:49.310232 1296085 main.go:141] libmachine: Using SSH client type: native
	I1018 08:38:49.310521 1296085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34601 <nil> <nil>}
	I1018 08:38:49.310533 1296085 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 08:38:54.695417 1296085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 08:38:54.695430 1296085 machine.go:96] duration metric: took 6.233274995s to provisionDockerMachine
	I1018 08:38:54.695448 1296085 start.go:293] postStartSetup for "functional-441731" (driver="docker")
	I1018 08:38:54.695458 1296085 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 08:38:54.695517 1296085 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 08:38:54.695566 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:38:54.712614 1296085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
	I1018 08:38:54.815663 1296085 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 08:38:54.818880 1296085 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 08:38:54.818897 1296085 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 08:38:54.818907 1296085 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 08:38:54.818958 1296085 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 08:38:54.819043 1296085 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 08:38:54.819115 1296085 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/test/nested/copy/1276097/hosts -> hosts in /etc/test/nested/copy/1276097
	I1018 08:38:54.819156 1296085 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1276097
	I1018 08:38:54.826510 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 08:38:54.843769 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/test/nested/copy/1276097/hosts --> /etc/test/nested/copy/1276097/hosts (40 bytes)
	I1018 08:38:54.861874 1296085 start.go:296] duration metric: took 166.410822ms for postStartSetup
	I1018 08:38:54.861943 1296085 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:38:54.861999 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:38:54.879166 1296085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
	I1018 08:38:54.981429 1296085 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 08:38:54.986438 1296085 fix.go:56] duration metric: took 6.544004179s for fixHost
	I1018 08:38:54.986452 1296085 start.go:83] releasing machines lock for "functional-441731", held for 6.544042283s
	I1018 08:38:54.986520 1296085 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-441731
	I1018 08:38:55.008111 1296085 ssh_runner.go:195] Run: cat /version.json
	I1018 08:38:55.008163 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:38:55.008499 1296085 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 08:38:55.008565 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:38:55.032197 1296085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
	I1018 08:38:55.034573 1296085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
	I1018 08:38:55.226122 1296085 ssh_runner.go:195] Run: systemctl --version
	I1018 08:38:55.232659 1296085 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 08:38:55.271074 1296085 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 08:38:55.275722 1296085 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 08:38:55.275783 1296085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 08:38:55.283288 1296085 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 08:38:55.283301 1296085 start.go:495] detecting cgroup driver to use...
	I1018 08:38:55.283330 1296085 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 08:38:55.283375 1296085 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 08:38:55.298317 1296085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 08:38:55.311003 1296085 docker.go:218] disabling cri-docker service (if available) ...
	I1018 08:38:55.311065 1296085 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 08:38:55.326602 1296085 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 08:38:55.339784 1296085 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 08:38:55.480839 1296085 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 08:38:55.614935 1296085 docker.go:234] disabling docker service ...
	I1018 08:38:55.614990 1296085 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 08:38:55.631048 1296085 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 08:38:55.643895 1296085 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 08:38:55.791732 1296085 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 08:38:55.941588 1296085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 08:38:55.954543 1296085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 08:38:55.969555 1296085 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 08:38:55.969633 1296085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:38:55.978498 1296085 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 08:38:55.978568 1296085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:38:55.987280 1296085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:38:55.995696 1296085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:38:56.008502 1296085 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 08:38:56.017047 1296085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:38:56.026782 1296085 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:38:56.036009 1296085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:38:56.045382 1296085 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 08:38:56.053765 1296085 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 08:38:56.061613 1296085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:38:56.204571 1296085 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 08:39:03.849066 1296085 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.64447176s)
	I1018 08:39:03.849081 1296085 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 08:39:03.849145 1296085 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 08:39:03.852911 1296085 start.go:563] Will wait 60s for crictl version
	I1018 08:39:03.852962 1296085 ssh_runner.go:195] Run: which crictl
	I1018 08:39:03.856390 1296085 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 08:39:03.885660 1296085 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 08:39:03.885749 1296085 ssh_runner.go:195] Run: crio --version
	I1018 08:39:03.914096 1296085 ssh_runner.go:195] Run: crio --version
	I1018 08:39:03.946659 1296085 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 08:39:03.949663 1296085 cli_runner.go:164] Run: docker network inspect functional-441731 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:39:03.965563 1296085 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 08:39:03.972681 1296085 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1018 08:39:03.975622 1296085 kubeadm.go:883] updating cluster {Name:functional-441731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-441731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 08:39:03.975743 1296085 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:39:03.975825 1296085 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:39:04.009706 1296085 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:39:04.009718 1296085 crio.go:433] Images already preloaded, skipping extraction
	I1018 08:39:04.009780 1296085 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:39:04.037806 1296085 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:39:04.037822 1296085 cache_images.go:85] Images are preloaded, skipping loading
	I1018 08:39:04.037828 1296085 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1018 08:39:04.037925 1296085 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-441731 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-441731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 08:39:04.038005 1296085 ssh_runner.go:195] Run: crio config
	I1018 08:39:04.112110 1296085 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1018 08:39:04.112130 1296085 cni.go:84] Creating CNI manager for ""
	I1018 08:39:04.112139 1296085 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:39:04.112152 1296085 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 08:39:04.112174 1296085 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-441731 NodeName:functional-441731 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 08:39:04.112296 1296085 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-441731"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 08:39:04.112362 1296085 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 08:39:04.120182 1296085 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 08:39:04.120250 1296085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 08:39:04.128049 1296085 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 08:39:04.140651 1296085 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 08:39:04.153340 1296085 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
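
With kubeadm.yaml.new written out, the rendered config can be sanity-checked offline before it is used to reconfigure the control plane. A hedged sketch using the pinned kubeadm binary (the `config validate` subcommand exists in recent kubeadm releases; minikube itself does not run this step here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
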
	I1018 08:39:04.165598 1296085 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 08:39:04.169470 1296085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:39:04.313450 1296085 ssh_runner.go:195] Run: sudo systemctl start kubelet
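
The kubelet unit and its 10-kubeadm.conf drop-in copied above only take effect after systemd rereads its unit tree, hence the daemon-reload before the start. The generic install-and-start pattern, as a sketch (the local 10-kubeadm.conf file here is a stand-in for the content the log scp's from memory):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/   # carries the ExecStart override shown earlier
	sudo systemctl daemon-reload
	sudo systemctl start kubelet
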
	I1018 08:39:04.327191 1296085 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731 for IP: 192.168.49.2
	I1018 08:39:04.327204 1296085 certs.go:195] generating shared ca certs ...
	I1018 08:39:04.327218 1296085 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:39:04.327375 1296085 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 08:39:04.327430 1296085 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 08:39:04.327436 1296085 certs.go:257] generating profile certs ...
	I1018 08:39:04.327551 1296085 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.key
	I1018 08:39:04.327622 1296085 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/apiserver.key.14dc85a7
	I1018 08:39:04.327671 1296085 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/proxy-client.key
	I1018 08:39:04.327820 1296085 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 08:39:04.327880 1296085 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 08:39:04.327886 1296085 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 08:39:04.327908 1296085 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 08:39:04.327932 1296085 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 08:39:04.327967 1296085 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 08:39:04.328012 1296085 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 08:39:04.328700 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 08:39:04.346593 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 08:39:04.363258 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 08:39:04.380782 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 08:39:04.398459 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 08:39:04.416900 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 08:39:04.435277 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 08:39:04.453148 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 08:39:04.469944 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 08:39:04.491391 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 08:39:04.508443 1296085 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 08:39:04.525394 1296085 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 08:39:04.538371 1296085 ssh_runner.go:195] Run: openssl version
	I1018 08:39:04.544694 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 08:39:04.553061 1296085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:39:04.556748 1296085 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:39:04.556820 1296085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:39:04.598083 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 08:39:04.605783 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 08:39:04.613870 1296085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 08:39:04.617402 1296085 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 08:39:04.617455 1296085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 08:39:04.659052 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 08:39:04.666662 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 08:39:04.675656 1296085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 08:39:04.679199 1296085 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 08:39:04.679251 1296085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 08:39:04.720048 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
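
The ls/openssl/ln triples above implement OpenSSL's CApath layout: a certificate is trusted when it is reachable under /etc/ssl/certs/<subject-hash>.0. For one certificate the sequence is (a sketch; b5213941 is the hash computed for minikubeCA.pem above):

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")    # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # what the c_rehash tool would create
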
	I1018 08:39:04.727648 1296085 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 08:39:04.732031 1296085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 08:39:04.772715 1296085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 08:39:04.815006 1296085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 08:39:04.856726 1296085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 08:39:04.897176 1296085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 08:39:04.939301 1296085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
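
Each `openssl x509 -checkend 86400` above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, so a non-zero exit flags a cert that must be regenerated before the restart. As a sketch:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "certificate survives the next 24h"
	else
	  echo "certificate expires within 24h - regenerate it"
	fi
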
	I1018 08:39:04.979763 1296085 kubeadm.go:400] StartCluster: {Name:functional-441731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-441731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:39:04.979859 1296085 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:39:04.979926 1296085 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:39:05.011686 1296085 cri.go:89] found id: "48c052109e0b9c2b9542035d09096c063eb8b8b454b0208f8870d3d5c41b361b"
	I1018 08:39:05.011700 1296085 cri.go:89] found id: "7d9edf58b71e842e36e16ed341ca211e984ca83cc78a661152e464fd19a8fb5f"
	I1018 08:39:05.011703 1296085 cri.go:89] found id: "e677a07849b37cbafbd64fa560abd21dfc2f152d4c0f1081157b00f76cde159d"
	I1018 08:39:05.011705 1296085 cri.go:89] found id: "e6f56a540e99f8a69365623157da0e92947dfefbf1dbd256d40d2c5443cdfec4"
	I1018 08:39:05.011707 1296085 cri.go:89] found id: "a07b2be9bc89cd408028d50b7c24caa38f7da75371a87c156e4efa8cc15149cd"
	I1018 08:39:05.011710 1296085 cri.go:89] found id: "f4afb80863ad723fedb127abbe172bcf1a699592862dd1d89debcac2cfa38fbd"
	I1018 08:39:05.011712 1296085 cri.go:89] found id: "322358d163341ec4f79b44857f7355dd9e952a81b23d22c7a23313f1dcb2e20b"
	I1018 08:39:05.011714 1296085 cri.go:89] found id: "7a21d8fd024b315f1907c0a46edc9d7a5e4d78182e33ce5afc780248b97e80c4"
	I1018 08:39:05.011716 1296085 cri.go:89] found id: "1556ec1257e5b961bddfc668f174c503c9d2908d7fc0d6a0c3148c745e3d4574"
	I1018 08:39:05.011724 1296085 cri.go:89] found id: "f05e41e28c6992fce639a95a5e0a268828e4f956d2687945667a42c3be707231"
	I1018 08:39:05.011738 1296085 cri.go:89] found id: "d8969249a514c10b46d12a10639835c0fe1b56dd451c49895c8cf31d376c922e"
	I1018 08:39:05.011741 1296085 cri.go:89] found id: "5d76a29f5109a7e9615719c4259e4c182369f1426433725653ce102bfc021f68"
	I1018 08:39:05.011743 1296085 cri.go:89] found id: "48bf0a17c69ade4e894ee58eb5b18232e2bdaba18507c9317e8ca29b5f2ad3b0"
	I1018 08:39:05.011746 1296085 cri.go:89] found id: "e7a9f836f0dc174c9d90a22b80ee59d76f52fcdb02ea78e49377fca5c5deb885"
	I1018 08:39:05.011748 1296085 cri.go:89] found id: "c83ebd352862317256bf7b4dd7baf073a9646c95c85ba50a6fa4a8992fd3a73e"
	I1018 08:39:05.011752 1296085 cri.go:89] found id: "6995be8dd2d3deb9e1a371a5cc409e56e2fe9c6dc3130856b58c6c68db0b1233"
	I1018 08:39:05.011754 1296085 cri.go:89] found id: "b9de15dd9669e5e831313351d36780a040d8afe53bad07b1ab5f2568a47a6a9d"
	I1018 08:39:05.011758 1296085 cri.go:89] found id: "7b9378810635c04689f53bc92f37382e1a74a0e6f5763df25a443f86409d2071"
	I1018 08:39:05.011760 1296085 cri.go:89] found id: ""
	I1018 08:39:05.011817 1296085 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 08:39:05.023778 1296085 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:39:05Z" level=error msg="open /run/runc: no such file or directory"
	I1018 08:39:05.023894 1296085 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 08:39:05.032004 1296085 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 08:39:05.032013 1296085 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 08:39:05.032069 1296085 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 08:39:05.039738 1296085 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 08:39:05.040286 1296085 kubeconfig.go:125] found "functional-441731" server: "https://192.168.49.2:8441"
	I1018 08:39:05.041542 1296085 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 08:39:05.049266 1296085 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-18 08:37:09.052666325 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-18 08:39:04.161574971 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
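
The unified diff above is the whole drift check: the freshly rendered kubeadm.yaml.new is compared against the file from the previous start, and any difference (here the enable-admission-plugins override) triggers a reconfigure instead of a plain restart. The pattern, as a sketch (diff exits 0 when the files are identical):

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "kubeadm config unchanged"
	else
	  echo "config drift detected - reconfiguring from kubeadm.yaml.new"
	fi
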
	I1018 08:39:05.049277 1296085 kubeadm.go:1160] stopping kube-system containers ...
	I1018 08:39:05.049289 1296085 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1018 08:39:05.049350 1296085 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:39:05.081453 1296085 cri.go:89] found id: "48c052109e0b9c2b9542035d09096c063eb8b8b454b0208f8870d3d5c41b361b"
	I1018 08:39:05.081465 1296085 cri.go:89] found id: "7d9edf58b71e842e36e16ed341ca211e984ca83cc78a661152e464fd19a8fb5f"
	I1018 08:39:05.081468 1296085 cri.go:89] found id: "e677a07849b37cbafbd64fa560abd21dfc2f152d4c0f1081157b00f76cde159d"
	I1018 08:39:05.081471 1296085 cri.go:89] found id: "e6f56a540e99f8a69365623157da0e92947dfefbf1dbd256d40d2c5443cdfec4"
	I1018 08:39:05.081484 1296085 cri.go:89] found id: "a07b2be9bc89cd408028d50b7c24caa38f7da75371a87c156e4efa8cc15149cd"
	I1018 08:39:05.081488 1296085 cri.go:89] found id: "f4afb80863ad723fedb127abbe172bcf1a699592862dd1d89debcac2cfa38fbd"
	I1018 08:39:05.081490 1296085 cri.go:89] found id: "322358d163341ec4f79b44857f7355dd9e952a81b23d22c7a23313f1dcb2e20b"
	I1018 08:39:05.081492 1296085 cri.go:89] found id: "7a21d8fd024b315f1907c0a46edc9d7a5e4d78182e33ce5afc780248b97e80c4"
	I1018 08:39:05.081496 1296085 cri.go:89] found id: "1556ec1257e5b961bddfc668f174c503c9d2908d7fc0d6a0c3148c745e3d4574"
	I1018 08:39:05.081501 1296085 cri.go:89] found id: "f05e41e28c6992fce639a95a5e0a268828e4f956d2687945667a42c3be707231"
	I1018 08:39:05.081503 1296085 cri.go:89] found id: "d8969249a514c10b46d12a10639835c0fe1b56dd451c49895c8cf31d376c922e"
	I1018 08:39:05.081505 1296085 cri.go:89] found id: "5d76a29f5109a7e9615719c4259e4c182369f1426433725653ce102bfc021f68"
	I1018 08:39:05.081507 1296085 cri.go:89] found id: "48bf0a17c69ade4e894ee58eb5b18232e2bdaba18507c9317e8ca29b5f2ad3b0"
	I1018 08:39:05.081509 1296085 cri.go:89] found id: "e7a9f836f0dc174c9d90a22b80ee59d76f52fcdb02ea78e49377fca5c5deb885"
	I1018 08:39:05.081511 1296085 cri.go:89] found id: "c83ebd352862317256bf7b4dd7baf073a9646c95c85ba50a6fa4a8992fd3a73e"
	I1018 08:39:05.081516 1296085 cri.go:89] found id: "6995be8dd2d3deb9e1a371a5cc409e56e2fe9c6dc3130856b58c6c68db0b1233"
	I1018 08:39:05.081518 1296085 cri.go:89] found id: "b9de15dd9669e5e831313351d36780a040d8afe53bad07b1ab5f2568a47a6a9d"
	I1018 08:39:05.081523 1296085 cri.go:89] found id: "7b9378810635c04689f53bc92f37382e1a74a0e6f5763df25a443f86409d2071"
	I1018 08:39:05.081525 1296085 cri.go:89] found id: ""
	I1018 08:39:05.081529 1296085 cri.go:252] Stopping containers: [48c052109e0b9c2b9542035d09096c063eb8b8b454b0208f8870d3d5c41b361b 7d9edf58b71e842e36e16ed341ca211e984ca83cc78a661152e464fd19a8fb5f e677a07849b37cbafbd64fa560abd21dfc2f152d4c0f1081157b00f76cde159d e6f56a540e99f8a69365623157da0e92947dfefbf1dbd256d40d2c5443cdfec4 a07b2be9bc89cd408028d50b7c24caa38f7da75371a87c156e4efa8cc15149cd f4afb80863ad723fedb127abbe172bcf1a699592862dd1d89debcac2cfa38fbd 322358d163341ec4f79b44857f7355dd9e952a81b23d22c7a23313f1dcb2e20b 7a21d8fd024b315f1907c0a46edc9d7a5e4d78182e33ce5afc780248b97e80c4 1556ec1257e5b961bddfc668f174c503c9d2908d7fc0d6a0c3148c745e3d4574 f05e41e28c6992fce639a95a5e0a268828e4f956d2687945667a42c3be707231 d8969249a514c10b46d12a10639835c0fe1b56dd451c49895c8cf31d376c922e 5d76a29f5109a7e9615719c4259e4c182369f1426433725653ce102bfc021f68 48bf0a17c69ade4e894ee58eb5b18232e2bdaba18507c9317e8ca29b5f2ad3b0 e7a9f836f0dc174c9d90a22b80ee59d76f52fcdb02ea78e49377fca5c5deb885 c83ebd352862317256bf7b4dd7baf073a9646c95c85ba50a6fa4a8992fd3a73e 6995be8dd2d3deb9e1a371a5cc409e56e2fe9c6dc3130856b58c6c68db0b1233 b9de15dd9669e5e831313351d36780a040d8afe53bad07b1ab5f2568a47a6a9d 7b9378810635c04689f53bc92f37382e1a74a0e6f5763df25a443f86409d2071]
	I1018 08:39:05.081589 1296085 ssh_runner.go:195] Run: which crictl
	I1018 08:39:05.085601 1296085 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 48c052109e0b9c2b9542035d09096c063eb8b8b454b0208f8870d3d5c41b361b 7d9edf58b71e842e36e16ed341ca211e984ca83cc78a661152e464fd19a8fb5f e677a07849b37cbafbd64fa560abd21dfc2f152d4c0f1081157b00f76cde159d e6f56a540e99f8a69365623157da0e92947dfefbf1dbd256d40d2c5443cdfec4 a07b2be9bc89cd408028d50b7c24caa38f7da75371a87c156e4efa8cc15149cd f4afb80863ad723fedb127abbe172bcf1a699592862dd1d89debcac2cfa38fbd 322358d163341ec4f79b44857f7355dd9e952a81b23d22c7a23313f1dcb2e20b 7a21d8fd024b315f1907c0a46edc9d7a5e4d78182e33ce5afc780248b97e80c4 1556ec1257e5b961bddfc668f174c503c9d2908d7fc0d6a0c3148c745e3d4574 f05e41e28c6992fce639a95a5e0a268828e4f956d2687945667a42c3be707231 d8969249a514c10b46d12a10639835c0fe1b56dd451c49895c8cf31d376c922e 5d76a29f5109a7e9615719c4259e4c182369f1426433725653ce102bfc021f68 48bf0a17c69ade4e894ee58eb5b18232e2bdaba18507c9317e8ca29b5f2ad3b0 e7a9f836f0dc174c9d90a22b80ee59d76f52fcdb02ea78e49377fca5c5deb885 c83ebd352862317256bf7b4dd7baf073a9646c95c85ba50a6fa4a8992fd3a73e 6995be8dd2d3deb9e1a371a5cc409e56e2fe9c6dc3130856b58c6c68db0b1233 b9de15dd9669e5e831313351d36780a040d8afe53bad07b1ab5f2568a47a6a9d 7b9378810635c04689f53bc92f37382e1a74a0e6f5763df25a443f86409d2071
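
Rather than interpolating the eighteen IDs by hand, the list-then-stop step can be piped, reusing the same namespace label filter crictl applied above (a sketch):

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
	  | xargs -r sudo crictl stop --timeout=10
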
	I1018 08:39:05.202266 1296085 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 08:39:05.322842 1296085 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 08:39:05.330827 1296085 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct 18 08:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 18 08:37 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 18 08:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct 18 08:37 /etc/kubernetes/scheduler.conf
	
	I1018 08:39:05.330886 1296085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1018 08:39:05.338802 1296085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1018 08:39:05.346240 1296085 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 08:39:05.346300 1296085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 08:39:05.353609 1296085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1018 08:39:05.361214 1296085 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 08:39:05.361268 1296085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 08:39:05.368793 1296085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1018 08:39:05.376108 1296085 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 08:39:05.376164 1296085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
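
Each of the three checks above keeps a kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8441, and deletes it otherwise so that the following `kubeadm init phase kubeconfig` run regenerates it. The per-file pattern, as a sketch:

	EP=https://control-plane.minikube.internal:8441
	sudo grep -q "$EP" /etc/kubernetes/scheduler.conf || sudo rm -f /etc/kubernetes/scheduler.conf
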
	I1018 08:39:05.383664 1296085 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 08:39:05.391190 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 08:39:05.439585 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 08:39:09.138167 1296085 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.698556851s)
	I1018 08:39:09.138240 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 08:39:09.367894 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 08:39:09.428304 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1018 08:39:09.491529 1296085 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:39:09.491610 1296085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:39:09.991668 1296085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:39:10.492690 1296085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:39:10.509566 1296085 api_server.go:72] duration metric: took 1.018046246s to wait for apiserver process to appear ...
	I1018 08:39:10.509579 1296085 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:39:10.509608 1296085 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 08:39:13.632379 1296085 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 08:39:13.632396 1296085 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 08:39:13.632408 1296085 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 08:39:13.705556 1296085 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 08:39:13.705571 1296085 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 08:39:14.009709 1296085 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 08:39:14.021815 1296085 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 08:39:14.021834 1296085 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 08:39:14.510472 1296085 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 08:39:14.522482 1296085 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 08:39:14.522513 1296085 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 08:39:15.009949 1296085 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 08:39:15.026786 1296085 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 08:39:15.026811 1296085 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 08:39:15.510667 1296085 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 08:39:15.519225 1296085 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
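
The 403 -> 500 -> 200 progression above is the normal apiserver restart sequence: anonymous /healthz is forbidden until the RBAC bootstrap roles land, then the individual poststarthooks flip from failed to ok one by one. Polling the same endpoint by hand could look like this (a sketch; -k because the cluster CA is not in the host trust store, -f so 403 and 500 count as failures):

	until curl -ksf https://192.168.49.2:8441/healthz >/dev/null; do
	  sleep 0.5   # keep retrying until the apiserver answers 200 ok
	done
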
	I1018 08:39:15.533903 1296085 api_server.go:141] control plane version: v1.34.1
	I1018 08:39:15.533925 1296085 api_server.go:131] duration metric: took 5.024339333s to wait for apiserver health ...
	I1018 08:39:15.533933 1296085 cni.go:84] Creating CNI manager for ""
	I1018 08:39:15.533939 1296085 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:39:15.537194 1296085 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 08:39:15.540199 1296085 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 08:39:15.544505 1296085 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 08:39:15.544515 1296085 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 08:39:15.561781 1296085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 08:39:16.069808 1296085 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:39:16.073708 1296085 system_pods.go:59] 9 kube-system pods found
	I1018 08:39:16.073738 1296085 system_pods.go:61] "coredns-66bc5c9577-vckhd" [e5a7974c-1d5c-4150-b53b-4313294e2f54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:39:16.073746 1296085 system_pods.go:61] "coredns-66bc5c9577-w4kng" [041b4bad-ce3d-41d0-8bd8-e53f15619ab7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:39:16.073755 1296085 system_pods.go:61] "etcd-functional-441731" [cadda29c-4527-40a3-ad27-477c6696e2b8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 08:39:16.073760 1296085 system_pods.go:61] "kindnet-54kcr" [7d053ff8-95e6-4954-8894-140ea137c988] Running
	I1018 08:39:16.073767 1296085 system_pods.go:61] "kube-apiserver-functional-441731" [2a368377-1e4e-4501-a7aa-d202f0f396da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 08:39:16.073773 1296085 system_pods.go:61] "kube-controller-manager-functional-441731" [24e599f4-e3ac-439c-b6a1-eee9a765a054] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 08:39:16.073778 1296085 system_pods.go:61] "kube-proxy-lllgl" [08181884-a54c-41e4-8839-4f97c3a3cf12] Running
	I1018 08:39:16.073784 1296085 system_pods.go:61] "kube-scheduler-functional-441731" [0bc4d241-7cff-4453-b6c0-09d2a7e91c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 08:39:16.073787 1296085 system_pods.go:61] "storage-provisioner" [8d98d636-38fa-4717-9ac3-f3698ce74d31] Running
	I1018 08:39:16.073792 1296085 system_pods.go:74] duration metric: took 3.974473ms to wait for pod list to return data ...
	I1018 08:39:16.073798 1296085 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:39:16.076268 1296085 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 08:39:16.076286 1296085 node_conditions.go:123] node cpu capacity is 2
	I1018 08:39:16.076296 1296085 node_conditions.go:105] duration metric: took 2.494529ms to run NodePressure ...
	I1018 08:39:16.076354 1296085 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 08:39:16.330538 1296085 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1018 08:39:16.333587 1296085 kubeadm.go:743] kubelet initialised
	I1018 08:39:16.333598 1296085 kubeadm.go:744] duration metric: took 3.047318ms waiting for restarted kubelet to initialise ...
	I1018 08:39:16.333611 1296085 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 08:39:16.342832 1296085 ops.go:34] apiserver oom_adj: -16
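
The -16 comes straight from procfs: kubelet runs control-plane static pods with a strongly negative OOM adjustment so the kernel kills them last under memory pressure. The same probe by hand (a sketch; oom_adj is the legacy interface read here, oom_score_adj the current one):

	cat /proc/$(pgrep -xn kube-apiserver)/oom_adj        # -16, matching the log
	cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj  # current-interface equivalent
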
	I1018 08:39:16.342844 1296085 kubeadm.go:601] duration metric: took 11.310825348s to restartPrimaryControlPlane
	I1018 08:39:16.342850 1296085 kubeadm.go:402] duration metric: took 11.363099645s to StartCluster
	I1018 08:39:16.342867 1296085 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:39:16.342965 1296085 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:39:16.343578 1296085 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:39:16.343780 1296085 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:39:16.344056 1296085 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:39:16.344094 1296085 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 08:39:16.344210 1296085 addons.go:69] Setting storage-provisioner=true in profile "functional-441731"
	I1018 08:39:16.344225 1296085 addons.go:238] Setting addon storage-provisioner=true in "functional-441731"
	W1018 08:39:16.344230 1296085 addons.go:247] addon storage-provisioner should already be in state true
	I1018 08:39:16.344231 1296085 addons.go:69] Setting default-storageclass=true in profile "functional-441731"
	I1018 08:39:16.344244 1296085 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-441731"
	I1018 08:39:16.344250 1296085 host.go:66] Checking if "functional-441731" exists ...
	I1018 08:39:16.344558 1296085 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
	I1018 08:39:16.344711 1296085 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
	I1018 08:39:16.348014 1296085 out.go:179] * Verifying Kubernetes components...
	I1018 08:39:16.354210 1296085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:39:16.370095 1296085 addons.go:238] Setting addon default-storageclass=true in "functional-441731"
	W1018 08:39:16.370106 1296085 addons.go:247] addon default-storageclass should already be in state true
	I1018 08:39:16.370130 1296085 host.go:66] Checking if "functional-441731" exists ...
	I1018 08:39:16.370542 1296085 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
	I1018 08:39:16.380347 1296085 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 08:39:16.383817 1296085 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:39:16.383828 1296085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 08:39:16.383909 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:39:16.392599 1296085 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 08:39:16.392612 1296085 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 08:39:16.392675 1296085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:39:16.428724 1296085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
	I1018 08:39:16.433495 1296085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
	I1018 08:39:16.564779 1296085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:39:16.596138 1296085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:39:16.622240 1296085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 08:39:17.400907 1296085 node_ready.go:35] waiting up to 6m0s for node "functional-441731" to be "Ready" ...
	I1018 08:39:17.404867 1296085 node_ready.go:49] node "functional-441731" is "Ready"
	I1018 08:39:17.404882 1296085 node_ready.go:38] duration metric: took 3.943064ms for node "functional-441731" to be "Ready" ...
	I1018 08:39:17.404893 1296085 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:39:17.404956 1296085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:39:17.421350 1296085 api_server.go:72] duration metric: took 1.077545553s to wait for apiserver process to appear ...
	I1018 08:39:17.421362 1296085 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:39:17.421378 1296085 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1018 08:39:17.422153 1296085 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 08:39:17.425079 1296085 addons.go:514] duration metric: took 1.080981662s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 08:39:17.431668 1296085 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1018 08:39:17.432626 1296085 api_server.go:141] control plane version: v1.34.1
	I1018 08:39:17.432639 1296085 api_server.go:131] duration metric: took 11.270719ms to wait for apiserver health ...
	I1018 08:39:17.432646 1296085 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:39:17.436205 1296085 system_pods.go:59] 9 kube-system pods found
	I1018 08:39:17.436223 1296085 system_pods.go:61] "coredns-66bc5c9577-vckhd" [e5a7974c-1d5c-4150-b53b-4313294e2f54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:39:17.436230 1296085 system_pods.go:61] "coredns-66bc5c9577-w4kng" [041b4bad-ce3d-41d0-8bd8-e53f15619ab7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:39:17.436236 1296085 system_pods.go:61] "etcd-functional-441731" [cadda29c-4527-40a3-ad27-477c6696e2b8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 08:39:17.436240 1296085 system_pods.go:61] "kindnet-54kcr" [7d053ff8-95e6-4954-8894-140ea137c988] Running
	I1018 08:39:17.436248 1296085 system_pods.go:61] "kube-apiserver-functional-441731" [2a368377-1e4e-4501-a7aa-d202f0f396da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 08:39:17.436253 1296085 system_pods.go:61] "kube-controller-manager-functional-441731" [24e599f4-e3ac-439c-b6a1-eee9a765a054] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 08:39:17.436256 1296085 system_pods.go:61] "kube-proxy-lllgl" [08181884-a54c-41e4-8839-4f97c3a3cf12] Running
	I1018 08:39:17.436261 1296085 system_pods.go:61] "kube-scheduler-functional-441731" [0bc4d241-7cff-4453-b6c0-09d2a7e91c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 08:39:17.436264 1296085 system_pods.go:61] "storage-provisioner" [8d98d636-38fa-4717-9ac3-f3698ce74d31] Running
	I1018 08:39:17.436269 1296085 system_pods.go:74] duration metric: took 3.618552ms to wait for pod list to return data ...
	I1018 08:39:17.436276 1296085 default_sa.go:34] waiting for default service account to be created ...
	I1018 08:39:17.438566 1296085 default_sa.go:45] found service account: "default"
	I1018 08:39:17.438577 1296085 default_sa.go:55] duration metric: took 2.297005ms for default service account to be created ...
	I1018 08:39:17.438584 1296085 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 08:39:17.441941 1296085 system_pods.go:86] 9 kube-system pods found
	I1018 08:39:17.441958 1296085 system_pods.go:89] "coredns-66bc5c9577-vckhd" [e5a7974c-1d5c-4150-b53b-4313294e2f54] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:39:17.441967 1296085 system_pods.go:89] "coredns-66bc5c9577-w4kng" [041b4bad-ce3d-41d0-8bd8-e53f15619ab7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:39:17.441976 1296085 system_pods.go:89] "etcd-functional-441731" [cadda29c-4527-40a3-ad27-477c6696e2b8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 08:39:17.441979 1296085 system_pods.go:89] "kindnet-54kcr" [7d053ff8-95e6-4954-8894-140ea137c988] Running
	I1018 08:39:17.441985 1296085 system_pods.go:89] "kube-apiserver-functional-441731" [2a368377-1e4e-4501-a7aa-d202f0f396da] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 08:39:17.441990 1296085 system_pods.go:89] "kube-controller-manager-functional-441731" [24e599f4-e3ac-439c-b6a1-eee9a765a054] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 08:39:17.441994 1296085 system_pods.go:89] "kube-proxy-lllgl" [08181884-a54c-41e4-8839-4f97c3a3cf12] Running
	I1018 08:39:17.441998 1296085 system_pods.go:89] "kube-scheduler-functional-441731" [0bc4d241-7cff-4453-b6c0-09d2a7e91c0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 08:39:17.442001 1296085 system_pods.go:89] "storage-provisioner" [8d98d636-38fa-4717-9ac3-f3698ce74d31] Running
	I1018 08:39:17.442011 1296085 system_pods.go:126] duration metric: took 3.419066ms to wait for k8s-apps to be running ...
	I1018 08:39:17.442018 1296085 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 08:39:17.442078 1296085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:39:17.460962 1296085 system_svc.go:56] duration metric: took 18.926898ms WaitForService to wait for kubelet
	I1018 08:39:17.460981 1296085 kubeadm.go:586] duration metric: took 1.117181126s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:39:17.461001 1296085 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:39:17.463952 1296085 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 08:39:17.463969 1296085 node_conditions.go:123] node cpu capacity is 2
	I1018 08:39:17.463979 1296085 node_conditions.go:105] duration metric: took 2.973712ms to run NodePressure ...
	I1018 08:39:17.463990 1296085 start.go:241] waiting for startup goroutines ...
	I1018 08:39:17.463996 1296085 start.go:246] waiting for cluster config update ...
	I1018 08:39:17.464006 1296085 start.go:255] writing updated cluster config ...
	I1018 08:39:17.464326 1296085 ssh_runner.go:195] Run: rm -f paused
	I1018 08:39:17.467913 1296085 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:39:17.475755 1296085 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vckhd" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 08:39:19.480839 1296085 pod_ready.go:104] pod "coredns-66bc5c9577-vckhd" is not "Ready", error: <nil>
	W1018 08:39:21.481493 1296085 pod_ready.go:104] pod "coredns-66bc5c9577-vckhd" is not "Ready", error: <nil>
	W1018 08:39:23.481547 1296085 pod_ready.go:104] pod "coredns-66bc5c9577-vckhd" is not "Ready", error: <nil>
	I1018 08:39:23.981622 1296085 pod_ready.go:94] pod "coredns-66bc5c9577-vckhd" is "Ready"
	I1018 08:39:23.981636 1296085 pod_ready.go:86] duration metric: took 6.505867697s for pod "coredns-66bc5c9577-vckhd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:23.981648 1296085 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w4kng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:23.986822 1296085 pod_ready.go:94] pod "coredns-66bc5c9577-w4kng" is "Ready"
	I1018 08:39:23.986835 1296085 pod_ready.go:86] duration metric: took 5.182416ms for pod "coredns-66bc5c9577-w4kng" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:23.989702 1296085 pod_ready.go:83] waiting for pod "etcd-functional-441731" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 08:39:25.994763 1296085 pod_ready.go:104] pod "etcd-functional-441731" is not "Ready", error: <nil>
	W1018 08:39:27.995590 1296085 pod_ready.go:104] pod "etcd-functional-441731" is not "Ready", error: <nil>
	I1018 08:39:29.495453 1296085 pod_ready.go:94] pod "etcd-functional-441731" is "Ready"
	I1018 08:39:29.495467 1296085 pod_ready.go:86] duration metric: took 5.505752985s for pod "etcd-functional-441731" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:29.497622 1296085 pod_ready.go:83] waiting for pod "kube-apiserver-functional-441731" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:29.501585 1296085 pod_ready.go:94] pod "kube-apiserver-functional-441731" is "Ready"
	I1018 08:39:29.501597 1296085 pod_ready.go:86] duration metric: took 3.963995ms for pod "kube-apiserver-functional-441731" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:29.503944 1296085 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-441731" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:29.508946 1296085 pod_ready.go:94] pod "kube-controller-manager-functional-441731" is "Ready"
	I1018 08:39:29.508960 1296085 pod_ready.go:86] duration metric: took 5.003967ms for pod "kube-controller-manager-functional-441731" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:29.511244 1296085 pod_ready.go:83] waiting for pod "kube-proxy-lllgl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:29.694505 1296085 pod_ready.go:94] pod "kube-proxy-lllgl" is "Ready"
	I1018 08:39:29.694519 1296085 pod_ready.go:86] duration metric: took 183.263768ms for pod "kube-proxy-lllgl" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:29.894071 1296085 pod_ready.go:83] waiting for pod "kube-scheduler-functional-441731" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:30.293516 1296085 pod_ready.go:94] pod "kube-scheduler-functional-441731" is "Ready"
	I1018 08:39:30.293530 1296085 pod_ready.go:86] duration metric: took 399.446217ms for pod "kube-scheduler-functional-441731" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 08:39:30.293541 1296085 pod_ready.go:40] duration metric: took 12.825597516s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 08:39:30.343880 1296085 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 08:39:30.347380 1296085 out.go:179] * Done! kubectl is now configured to use "functional-441731" cluster and "default" namespace by default
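Note: the pod_ready lines above poll each labelled kube-system pod until its Ready condition turns true, inside a 4m0s budget, re-checking on a roughly 2-second cadence. A minimal client-go sketch of that pattern follows (assuming a standard kubeconfig at the default path; minikube's real implementation lives in its own packages and differs in detail):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // the 4m0s "extra waiting" budget above
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err == nil && len(pods.Items) > 0 {
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Println("all matching pods are Ready")
				return
			}
		}
		time.Sleep(2 * time.Second) // the log above re-checks at about this cadence
	}
	fmt.Println("timed out waiting for pods")
}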
	
	
	==> CRI-O <==
	Oct 18 08:40:02 functional-441731 crio[3563]: time="2025-10-18T08:40:02.629947634Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-54v8f Namespace:default ID:567c993dfd3966ed05a400fe580fd6c18b9f1dd43dc22e5128d18e6b010c94d1 UID:062a3c96-1673-415a-a8e0-5e60cc4ed1ad NetNS:/var/run/netns/fa1cdd4e-46f6-4d88-b2ec-887bd116e5a0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000633ef8}] Aliases:map[]}"
	Oct 18 08:40:02 functional-441731 crio[3563]: time="2025-10-18T08:40:02.630095708Z" level=info msg="Checking pod default_hello-node-75c85bcc94-54v8f for CNI network kindnet (type=ptp)"
	Oct 18 08:40:02 functional-441731 crio[3563]: time="2025-10-18T08:40:02.633352875Z" level=info msg="Ran pod sandbox 567c993dfd3966ed05a400fe580fd6c18b9f1dd43dc22e5128d18e6b010c94d1 with infra container: default/hello-node-75c85bcc94-54v8f/POD" id=faeaa600-1a44-452d-b612-1e719842e123 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 08:40:02 functional-441731 crio[3563]: time="2025-10-18T08:40:02.637271522Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e2ee9a9a-0893-4436-b005-76e4b22fc5e6 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.683458312Z" level=info msg="Stopping pod sandbox: 9a5a999bccf2174ab7d6b623d6ed6fca5d98abafdbc7c5306e61f7e670be1b2d" id=485b56a6-da37-4b13-b838-5aa7ae778767 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.683514327Z" level=info msg="Stopped pod sandbox (already stopped): 9a5a999bccf2174ab7d6b623d6ed6fca5d98abafdbc7c5306e61f7e670be1b2d" id=485b56a6-da37-4b13-b838-5aa7ae778767 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.684065106Z" level=info msg="Removing pod sandbox: 9a5a999bccf2174ab7d6b623d6ed6fca5d98abafdbc7c5306e61f7e670be1b2d" id=3ab39b7c-1041-470e-9f8f-74d6920812c5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.687765403Z" level=info msg="Removed pod sandbox: 9a5a999bccf2174ab7d6b623d6ed6fca5d98abafdbc7c5306e61f7e670be1b2d" id=3ab39b7c-1041-470e-9f8f-74d6920812c5 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.688364214Z" level=info msg="Stopping pod sandbox: 1e4d547b0e37af228d7fcf53092bb54863bb8f233b654f57a20d5b78cf5bacd6" id=053c97fe-4599-4d5c-be93-12347217816f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.688469614Z" level=info msg="Stopped pod sandbox (already stopped): 1e4d547b0e37af228d7fcf53092bb54863bb8f233b654f57a20d5b78cf5bacd6" id=053c97fe-4599-4d5c-be93-12347217816f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.688832459Z" level=info msg="Removing pod sandbox: 1e4d547b0e37af228d7fcf53092bb54863bb8f233b654f57a20d5b78cf5bacd6" id=dd3aeb3b-bada-474f-9657-523277b341d2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.69234237Z" level=info msg="Removed pod sandbox: 1e4d547b0e37af228d7fcf53092bb54863bb8f233b654f57a20d5b78cf5bacd6" id=dd3aeb3b-bada-474f-9657-523277b341d2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.701952128Z" level=info msg="Stopping pod sandbox: 3b3ee1b2bd7d7cdff7eb6248af751e77456f48d24c69058a2e7ab6d4c04dbfda" id=27480cb1-4be1-46ea-9cad-c31ef089101a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.7021376Z" level=info msg="Stopped pod sandbox (already stopped): 3b3ee1b2bd7d7cdff7eb6248af751e77456f48d24c69058a2e7ab6d4c04dbfda" id=27480cb1-4be1-46ea-9cad-c31ef089101a name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.705024266Z" level=info msg="Removing pod sandbox: 3b3ee1b2bd7d7cdff7eb6248af751e77456f48d24c69058a2e7ab6d4c04dbfda" id=549a5b04-4704-44ab-b95c-d2d0550df416 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:40:09 functional-441731 crio[3563]: time="2025-10-18T08:40:09.709274538Z" level=info msg="Removed pod sandbox: 3b3ee1b2bd7d7cdff7eb6248af751e77456f48d24c69058a2e7ab6d4c04dbfda" id=549a5b04-4704-44ab-b95c-d2d0550df416 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 08:40:18 functional-441731 crio[3563]: time="2025-10-18T08:40:18.555395209Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=37e11a18-8fc0-492d-827c-6636e3e6c550 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:40:25 functional-441731 crio[3563]: time="2025-10-18T08:40:25.555830268Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e1db4152-50ec-4528-a253-e6faa8f27097 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:40:46 functional-441731 crio[3563]: time="2025-10-18T08:40:46.555323103Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=84c5a1f6-06f0-48ef-bee7-acb78941f99a name=/runtime.v1.ImageService/PullImage
	Oct 18 08:41:06 functional-441731 crio[3563]: time="2025-10-18T08:41:06.556015098Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6b3bb342-cc52-4baa-8171-c5b8f02b0b64 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:41:40 functional-441731 crio[3563]: time="2025-10-18T08:41:40.555774804Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c37dceae-3125-4408-83f1-0f7bd8f98c11 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:42:32 functional-441731 crio[3563]: time="2025-10-18T08:42:32.555280642Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a5c79d2f-0992-4402-a017-c5748f65eccd name=/runtime.v1.ImageService/PullImage
	Oct 18 08:43:05 functional-441731 crio[3563]: time="2025-10-18T08:43:05.55680222Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5cc4bc14-654e-4705-b8e0-10286bc24968 name=/runtime.v1.ImageService/PullImage
	Oct 18 08:45:17 functional-441731 crio[3563]: time="2025-10-18T08:45:17.556518981Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e49ba2d5-e640-4e44-8399-e016d537370f name=/runtime.v1.ImageService/PullImage
	Oct 18 08:45:51 functional-441731 crio[3563]: time="2025-10-18T08:45:51.556453059Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=62de17ec-0755-44f3-8326-a86d59387283 name=/runtime.v1.ImageService/PullImage
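Note: the widening gaps between the repeated "Pulling image: kicbase/echo-server:latest" entries above (roughly 16s, then 21s, 34s, 52s, and eventually minutes apart) are the signature of kubelet's exponential image-pull backoff; CRI-O only logs each retried pull request. A toy sketch of such a schedule, assuming a 10-second base that doubles up to a 5-minute ceiling (kubelet's exact parameters and jitter may differ):

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second       // assumed base delay
	const maxDelay = 5 * time.Minute // assumed ceiling
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}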
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	caf1dd968923c       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   d94a82b8df5a6       sp-pod                                      default
	16edb64c3f2fd       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   8794d68281818       nginx-svc                                   default
	e3efb30f5e7d0       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   33b35cd010679       coredns-66bc5c9577-w4kng                    kube-system
	86e2917462d26       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   9f01c17f8689e       kindnet-54kcr                               kube-system
	b5921fcd8a911       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   2bc403e590ef6       kube-proxy-lllgl                            kube-system
	0dc6790052cfc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   5187ed93f9caa       storage-provisioner                         kube-system
	9a0c8c6071c27       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   d94d436559970       coredns-66bc5c9577-vckhd                    kube-system
	92ae40f9fc9d4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   f343b80087508       kube-apiserver-functional-441731            kube-system
	58c79003b1e8d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   771868f5bcc1f       kube-controller-manager-functional-441731   kube-system
	a5e721891deec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   f4f112d8fd656       kube-scheduler-functional-441731            kube-system
	81249b9f74357       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   770f76136b014       etcd-functional-441731                      kube-system
	48c052109e0b9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   5187ed93f9caa       storage-provisioner                         kube-system
	e677a07849b37       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   770f76136b014       etcd-functional-441731                      kube-system
	e6f56a540e99f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   d94d436559970       coredns-66bc5c9577-vckhd                    kube-system
	a07b2be9bc89c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   771868f5bcc1f       kube-controller-manager-functional-441731   kube-system
	f4afb80863ad7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   f4f112d8fd656       kube-scheduler-functional-441731            kube-system
	7a21d8fd024b3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   33b35cd010679       coredns-66bc5c9577-w4kng                    kube-system
	1556ec1257e5b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   9f01c17f8689e       kindnet-54kcr                               kube-system
	f05e41e28c699       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   2bc403e590ef6       kube-proxy-lllgl                            kube-system
	
	
	==> coredns [7a21d8fd024b315f1907c0a46edc9d7a5e4d78182e33ce5afc780248b97e80c4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33717 - 24284 "HINFO IN 2809026796211997695.7573419091032920117. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019814753s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9a0c8c6071c27c8161ba7e049d072290180aaebc74aaf7acbe6fd99e14e607d8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43439 - 59106 "HINFO IN 6497137709134129486.992620879465413754. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006013056s
	
	
	==> coredns [e3efb30f5e7d0fd03599797bddd7a0100fe310039db817d8eb4f175a4398d479] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57701 - 15744 "HINFO IN 457142350083553839.378574553889751890. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.026024519s
	
	
	==> coredns [e6f56a540e99f8a69365623157da0e92947dfefbf1dbd256d40d2c5443cdfec4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49528 - 2611 "HINFO IN 6194141538887352029.3288725061576846961. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024738099s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-441731
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-441731
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=functional-441731
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_37_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:37:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-441731
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 08:49:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 08:49:35 +0000   Sat, 18 Oct 2025 08:37:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 08:49:35 +0000   Sat, 18 Oct 2025 08:37:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 08:49:35 +0000   Sat, 18 Oct 2025 08:37:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 08:49:35 +0000   Sat, 18 Oct 2025 08:38:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-441731
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                eb15e720-9f8e-40da-8a84-ebf600dabf59
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-54v8f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  default                     hello-node-connect-7d85dfc575-xzmmq          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-vckhd                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 coredns-66bc5c9577-w4kng                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-441731                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-54kcr                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-441731             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-441731    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lllgl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-441731             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-441731 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-441731 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-441731 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-441731 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-441731 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-441731 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-441731 event: Registered Node functional-441731 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-441731 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-441731 event: Registered Node functional-441731 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-441731 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-441731 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-441731 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-441731 event: Registered Node functional-441731 in Controller
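Note: the 47% CPU-request figure in the Allocated resources table above can be reproduced by summing the per-pod CPU requests from the Non-terminated Pods table against the node's reported 2-CPU capacity; a quick check:

package main

import "fmt"

func main() {
	// CPU requests in millicores, copied from the Non-terminated Pods table above.
	requests := []int{
		100, // coredns-66bc5c9577-vckhd
		100, // coredns-66bc5c9577-w4kng
		100, // etcd-functional-441731
		100, // kindnet-54kcr
		250, // kube-apiserver-functional-441731
		200, // kube-controller-manager-functional-441731
		100, // kube-scheduler-functional-441731
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	const capacity = 2000 // the node reports cpu: 2, i.e. 2000m
	fmt.Printf("%dm / %dm = %d%%\n", total, capacity, total*100/capacity)
	// Output: 950m / 2000m = 47% -- integer truncation, matching kubectl's figure.
}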
	
	
	==> dmesg <==
	[Oct18 08:06] overlayfs: idmapped layers are currently not supported
	[Oct18 08:08] overlayfs: idmapped layers are currently not supported
	[Oct18 08:09] overlayfs: idmapped layers are currently not supported
	[Oct18 08:10] overlayfs: idmapped layers are currently not supported
	[ +38.212735] overlayfs: idmapped layers are currently not supported
	[Oct18 08:11] overlayfs: idmapped layers are currently not supported
	[Oct18 08:12] overlayfs: idmapped layers are currently not supported
	[Oct18 08:13] overlayfs: idmapped layers are currently not supported
	[  +7.848314] overlayfs: idmapped layers are currently not supported
	[Oct18 08:14] overlayfs: idmapped layers are currently not supported
	[Oct18 08:15] overlayfs: idmapped layers are currently not supported
	[Oct18 08:16] overlayfs: idmapped layers are currently not supported
	[ +29.066776] overlayfs: idmapped layers are currently not supported
	[Oct18 08:17] overlayfs: idmapped layers are currently not supported
	[Oct18 08:18] overlayfs: idmapped layers are currently not supported
	[  +0.898927] overlayfs: idmapped layers are currently not supported
	[Oct18 08:20] overlayfs: idmapped layers are currently not supported
	[  +5.259921] overlayfs: idmapped layers are currently not supported
	[Oct18 08:22] overlayfs: idmapped layers are currently not supported
	[  +6.764143] overlayfs: idmapped layers are currently not supported
	[Oct18 08:24] overlayfs: idmapped layers are currently not supported
	[Oct18 08:29] kauditd_printk_skb: 8 callbacks suppressed
	[Oct18 08:30] overlayfs: idmapped layers are currently not supported
	[Oct18 08:36] overlayfs: idmapped layers are currently not supported
	[Oct18 08:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [81249b9f74357686cf4f1e966c5ee11fe24bfad71cf2fa6c981d070b54431b2d] <==
	{"level":"warn","ts":"2025-10-18T08:39:12.216675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.240907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.261760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.280546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.297073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.311013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.336487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.352661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.371166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.384209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.399953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.416427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.436646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.456759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.473658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.488612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.504513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.527689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.568297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.596154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.618475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:39:12.715443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51962","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T08:49:11.324049Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1169}
	{"level":"info","ts":"2025-10-18T08:49:11.347041Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1169,"took":"22.692086ms","hash":3649207953,"current-db-size-bytes":3436544,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1564672,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-18T08:49:11.347098Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3649207953,"revision":1169,"compact-revision":-1}
	
	
	==> etcd [e677a07849b37cbafbd64fa560abd21dfc2f152d4c0f1081157b00f76cde159d] <==
	{"level":"warn","ts":"2025-10-18T08:38:25.948910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:38:25.966620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:38:25.990676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:38:26.022323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:38:26.036703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:38:26.056690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:38:26.155770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34236","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T08:38:49.489030Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T08:38:49.489245Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-441731","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T08:38:49.489520Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T08:38:49.649842Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T08:38:49.649919Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T08:38:49.649953Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-18T08:38:49.650024Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T08:38:49.650003Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T08:38:49.650087Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T08:38:49.650141Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T08:38:49.650180Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T08:38:49.650151Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T08:38:49.650254Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T08:38:49.650290Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T08:38:49.653817Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T08:38:49.653901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T08:38:49.653940Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T08:38:49.653947Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-441731","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 08:49:51 up 10:32,  0 user,  load average: 0.17, 0.40, 1.17
	Linux functional-441731 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1556ec1257e5b961bddfc668f174c503c9d2908d7fc0d6a0c3148c745e3d4574] <==
	I1018 08:38:22.965301       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 08:38:22.965538       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 08:38:22.965679       1 main.go:148] setting mtu 1500 for CNI 
	I1018 08:38:22.965690       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 08:38:22.965703       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T08:38:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 08:38:23.137315       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 08:38:23.137414       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 08:38:23.137427       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1018 08:38:23.138275       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 08:38:23.138819       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 08:38:23.138902       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1018 08:38:23.138977       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 08:38:23.223998       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 08:38:27.539915       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 08:38:27.539995       1 metrics.go:72] Registering metrics
	I1018 08:38:27.540054       1 controller.go:711] "Syncing nftables rules"
	I1018 08:38:33.137352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:38:33.137404       1 main.go:301] handling current node
	I1018 08:38:43.137278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:38:43.137323       1 main.go:301] handling current node
	
	
	==> kindnet [86e2917462d263e2cc55ca172d055e4c09d31cab465dfd18251749b655ccae2f] <==
	I1018 08:47:45.230534       1 main.go:301] handling current node
	I1018 08:47:55.229118       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:47:55.229263       1 main.go:301] handling current node
	I1018 08:48:05.228370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:48:05.228405       1 main.go:301] handling current node
	I1018 08:48:15.228360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:48:15.228453       1 main.go:301] handling current node
	I1018 08:48:25.232610       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:48:25.232644       1 main.go:301] handling current node
	I1018 08:48:35.228984       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:48:35.229020       1 main.go:301] handling current node
	I1018 08:48:45.230025       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:48:45.230237       1 main.go:301] handling current node
	I1018 08:48:55.229419       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:48:55.229531       1 main.go:301] handling current node
	I1018 08:49:05.233290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:49:05.233395       1 main.go:301] handling current node
	I1018 08:49:15.234295       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:49:15.234332       1 main.go:301] handling current node
	I1018 08:49:25.228754       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:49:25.228789       1 main.go:301] handling current node
	I1018 08:49:35.228598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:49:35.228631       1 main.go:301] handling current node
	I1018 08:49:45.228901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:49:45.229043       1 main.go:301] handling current node
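Note: the kindnet entries above arrive on a fixed ~10-second cadence, one reconcile pass over the node's IPs per tick. A trivially small sketch of such a ticker-driven loop (illustrative only; kindnet's real controller does the route and nftables work on each pass):

package main

import (
	"fmt"
	"time"
)

func main() {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for i := 0; i < 3; i++ {
		<-ticker.C
		// kindnet's pass: list the node's IPs, refresh routes and nftables rules.
		fmt.Println("handling current node at", time.Now().Format(time.RFC3339))
	}
}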
	
	
	==> kube-apiserver [92ae40f9fc9d4468165be38a4743362ec084f30d35f548e48f4ba2682785e73c] <==
	I1018 08:39:13.829854       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 08:39:13.830093       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 08:39:13.830152       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E1018 08:39:13.835132       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 08:39:13.844864       1 cache.go:39] Caches are synced for autoregister controller
	I1018 08:39:13.847259       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 08:39:13.851432       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 08:39:13.851504       1 policy_source.go:240] refreshing policies
	I1018 08:39:13.927888       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 08:39:14.549295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 08:39:14.631185       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 08:39:16.060361       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 08:39:16.185372       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 08:39:16.249860       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 08:39:16.257212       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 08:39:17.365026       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 08:39:17.467818       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 08:39:17.665922       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 08:39:33.841717       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.73.195"}
	I1018 08:39:39.204423       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.204.138"}
	I1018 08:39:48.885891       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.28.216"}
	E1018 08:39:55.348495       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40588: use of closed network connection
	E1018 08:40:02.178502       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40642: use of closed network connection
	I1018 08:40:02.428967       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.19.179"}
	I1018 08:49:13.758714       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [58c79003b1e8d89c4e6cbcbd3544cbb7c647c83bcc33121da5e93701edc5ca11] <==
	I1018 08:39:17.310915       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:39:17.310949       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 08:39:17.310966       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 08:39:17.315152       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 08:39:17.315215       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 08:39:17.315266       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 08:39:17.315328       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 08:39:17.322600       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 08:39:17.324818       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 08:39:17.332067       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 08:39:17.335343       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 08:39:17.339578       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 08:39:17.339680       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 08:39:17.339760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-441731"
	I1018 08:39:17.339822       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 08:39:17.342372       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 08:39:17.344325       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 08:39:17.347244       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 08:39:17.352758       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 08:39:17.360507       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 08:39:17.373850       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:39:17.377268       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:39:17.380245       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:39:17.380274       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 08:39:17.380283       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [a07b2be9bc89cd408028d50b7c24caa38f7da75371a87c156e4efa8cc15149cd] <==
	I1018 08:38:30.634176       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 08:38:30.638372       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 08:38:30.640640       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 08:38:30.642876       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 08:38:30.644139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 08:38:30.668422       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 08:38:30.668467       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:38:30.671658       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 08:38:30.672949       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 08:38:30.675969       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 08:38:30.676058       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 08:38:30.676649       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 08:38:30.676704       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 08:38:30.676758       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 08:38:30.676833       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-441731"
	I1018 08:38:30.676872       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 08:38:30.676917       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 08:38:30.677119       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 08:38:30.675975       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 08:38:30.677454       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 08:38:30.677523       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 08:38:30.677592       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 08:38:30.681508       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 08:38:30.682927       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:38:30.685384       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	
	
	==> kube-proxy [b5921fcd8a911e561b80bfcc9710cc9f7a7c076c7b9b6ca464d3aafa075c7a8d] <==
	I1018 08:39:15.084489       1 server_linux.go:53] "Using iptables proxy"
	I1018 08:39:15.242934       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:39:15.344080       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:39:15.344121       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:39:15.344188       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:39:15.364284       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:39:15.364390       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:39:15.368574       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:39:15.368966       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:39:15.369155       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:39:15.370370       1 config.go:200] "Starting service config controller"
	I1018 08:39:15.370580       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:39:15.370641       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:39:15.370706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:39:15.370744       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:39:15.370770       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:39:15.373431       1 config.go:309] "Starting node config controller"
	I1018 08:39:15.374524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:39:15.374574       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:39:15.471507       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:39:15.471547       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 08:39:15.471516       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [f05e41e28c6992fce639a95a5e0a268828e4f956d2687945667a42c3be707231] <==
	I1018 08:38:25.076870       1 server_linux.go:53] "Using iptables proxy"
	I1018 08:38:26.076776       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:38:27.462421       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:38:27.467910       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:38:27.511928       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:38:27.875262       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:38:27.875324       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:38:28.079967       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:38:28.129155       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:38:28.129180       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:38:28.131199       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:38:28.151389       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:38:28.137778       1 config.go:200] "Starting service config controller"
	I1018 08:38:28.151673       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:38:28.137960       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:38:28.151742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:38:28.150842       1 config.go:309] "Starting node config controller"
	I1018 08:38:28.151794       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:38:28.151832       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:38:28.252451       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:38:28.252560       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 08:38:28.252575       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a5e721891deec61ff93c60dfb9aa3d6ce5b3f0d84e884a19921716f5452ce339] <==
	I1018 08:39:11.986413       1 serving.go:386] Generated self-signed cert in-memory
	W1018 08:39:13.627563       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 08:39:13.627662       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 08:39:13.627699       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 08:39:13.627728       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 08:39:13.741398       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 08:39:13.743872       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:39:13.750482       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 08:39:13.750677       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 08:39:13.750723       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 08:39:13.750776       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 08:39:13.762911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:39:13.763057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:39:13.763148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:39:13.763238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 08:39:13.763316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 08:39:13.763413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 08:39:13.851891       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f4afb80863ad723fedb127abbe172bcf1a699592862dd1d89debcac2cfa38fbd] <==
	I1018 08:38:26.988417       1 serving.go:386] Generated self-signed cert in-memory
	I1018 08:38:28.192652       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 08:38:28.192687       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:38:28.198841       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 08:38:28.198943       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 08:38:28.198977       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 08:38:28.199010       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 08:38:28.202046       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 08:38:28.202187       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 08:38:28.202550       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 08:38:28.204579       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 08:38:28.299249       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 08:38:28.303622       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 08:38:28.304661       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 08:38:49.479185       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 08:38:49.479203       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 08:38:49.479222       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 08:38:49.479244       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 08:38:49.479263       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 08:38:49.479281       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1018 08:38:49.479572       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 08:38:49.479597       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 18 08:47:10 functional-441731 kubelet[3873]: E1018 08:47:10.555292    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:47:18 functional-441731 kubelet[3873]: E1018 08:47:18.555468    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:47:22 functional-441731 kubelet[3873]: E1018 08:47:22.555489    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:47:33 functional-441731 kubelet[3873]: E1018 08:47:33.555018    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:47:37 functional-441731 kubelet[3873]: E1018 08:47:37.555479    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:47:46 functional-441731 kubelet[3873]: E1018 08:47:46.554970    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:47:52 functional-441731 kubelet[3873]: E1018 08:47:52.555409    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:47:57 functional-441731 kubelet[3873]: E1018 08:47:57.555522    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:48:04 functional-441731 kubelet[3873]: E1018 08:48:04.555648    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:48:11 functional-441731 kubelet[3873]: E1018 08:48:11.556757    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:48:16 functional-441731 kubelet[3873]: E1018 08:48:16.554999    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:48:22 functional-441731 kubelet[3873]: E1018 08:48:22.554709    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:48:28 functional-441731 kubelet[3873]: E1018 08:48:28.554917    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:48:34 functional-441731 kubelet[3873]: E1018 08:48:34.555567    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:48:39 functional-441731 kubelet[3873]: E1018 08:48:39.556287    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:48:48 functional-441731 kubelet[3873]: E1018 08:48:48.554949    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:48:53 functional-441731 kubelet[3873]: E1018 08:48:53.555269    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:49:01 functional-441731 kubelet[3873]: E1018 08:49:01.557443    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:49:07 functional-441731 kubelet[3873]: E1018 08:49:07.554990    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:49:15 functional-441731 kubelet[3873]: E1018 08:49:15.554680    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:49:20 functional-441731 kubelet[3873]: E1018 08:49:20.556112    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:49:29 functional-441731 kubelet[3873]: E1018 08:49:29.555784    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:49:34 functional-441731 kubelet[3873]: E1018 08:49:34.555349    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	Oct 18 08:49:44 functional-441731 kubelet[3873]: E1018 08:49:44.554898    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-xzmmq" podUID="34c3e15d-3754-4e22-b341-024f3dc1c356"
	Oct 18 08:49:46 functional-441731 kubelet[3873]: E1018 08:49:46.555058    3873 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-54v8f" podUID="062a3c96-1673-415a-a8e0-5e60cc4ed1ad"
	
	
	==> storage-provisioner [0dc6790052cfcac7db1f52b62de9a92ecf54b516cfc6f21ab4044a3494f049cb] <==
	W1018 08:49:27.016966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:29.020817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:29.027315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:31.030866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:31.035401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:33.039112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:33.043612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:35.046879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:35.053624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:37.056872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:37.061614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:39.064846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:39.071216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:41.074697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:41.079159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:43.081994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:43.086383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:45.097551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:45.116157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:47.118912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:47.123505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:49.126946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:49.132064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:51.135373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:49:51.144910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [48c052109e0b9c2b9542035d09096c063eb8b8b454b0208f8870d3d5c41b361b] <==
	I1018 08:38:37.420100       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 08:38:37.433363       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 08:38:37.434108       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 08:38:37.436603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:38:40.891511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:38:45.154804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 08:38:48.753384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-441731 -n functional-441731
helpers_test.go:269: (dbg) Run:  kubectl --context functional-441731 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-54v8f hello-node-connect-7d85dfc575-xzmmq
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-441731 describe pod hello-node-75c85bcc94-54v8f hello-node-connect-7d85dfc575-xzmmq
helpers_test.go:290: (dbg) kubectl --context functional-441731 describe pod hello-node-75c85bcc94-54v8f hello-node-connect-7d85dfc575-xzmmq:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-54v8f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-441731/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 08:40:02 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ph759 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ph759:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m50s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-54v8f to functional-441731
	  Normal   Pulling    6m47s (x5 over 9m50s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m47s (x5 over 9m50s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m47s (x5 over 9m50s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m45s (x20 over 9m50s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m30s (x21 over 9m50s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-xzmmq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-441731/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 08:39:48 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kvdlf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kvdlf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-xzmmq to functional-441731
	  Warning  Failed     7m20s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m20s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m2s (x19 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m50s (x20 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Normal   Pulling    4m35s (x6 over 10m)   kubelet            Pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.57s)
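The pull failures above all trace to CRI-O short-name resolution: with short-name-mode set to enforcing and more than one unqualified-search registry configured, the short name kicbase/echo-server:latest resolves to an ambiguous candidate list and the pull is refused. Two possible workarounds, sketched here as illustrations (the docker.io registry and the drop-in path are assumptions, not taken from this run):

	# fully qualify the image at deploy time
	kubectl --context functional-441731 create deployment hello-node --image=docker.io/kicbase/echo-server

	# or pin a short-name alias on the node (containers-common [aliases] syntax),
	# e.g. in /etc/containers/registries.conf.d/000-echo-server.conf:
	#   [aliases]
	#     "kicbase/echo-server" = "docker.io/kicbase/echo-server"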

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-441731 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-441731 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-54v8f" [062a3c96-1673-415a-a8e0-5e60cc4ed1ad] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1018 08:40:27.212766 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:42:43.345793 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:43:11.054317 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:47:43.344346 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-441731 -n functional-441731
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 08:50:02.881713154 +0000 UTC m=+1228.647127264
functional_test.go:1460: (dbg) Run:  kubectl --context functional-441731 describe po hello-node-75c85bcc94-54v8f -n default
functional_test.go:1460: (dbg) kubectl --context functional-441731 describe po hello-node-75c85bcc94-54v8f -n default:
Name:             hello-node-75c85bcc94-54v8f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-441731/192.168.49.2
Start Time:       Sat, 18 Oct 2025 08:40:02 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:
    Image:          kicbase/echo-server
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ph759 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-ph759:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-54v8f to functional-441731
  Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m55s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-441731 logs hello-node-75c85bcc94-54v8f -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-441731 logs hello-node-75c85bcc94-54v8f -n default: exit status 1 (107.212906ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-54v8f" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-441731 logs hello-node-75c85bcc94-54v8f -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.92s)
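The deployment and its NodePort service are created successfully; only the image pull fails. The enforcement behaviour can be reproduced directly on the node with crictl, which ships in the minikube node image (the fully qualified reference below is an assumption about the intended registry):

	# expected to fail with the same "ambiguous list" error
	out/minikube-linux-arm64 -p functional-441731 ssh -- sudo crictl pull kicbase/echo-server:latest
	# expected to succeed, isolating short-name resolution as the cause
	out/minikube-linux-arm64 -p functional-441731 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest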

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 service --namespace=default --https --url hello-node: exit status 115 (485.535387ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30276
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_mount_da162c1d0fbdf4ae29c99dba4ea7e4f1b6c8e062_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-441731 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)
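This subtest, and the Format and URL subtests that follow, fail the same way: the NodePort (30276) is allocated, so a URL is printed, but no running pod backs the service because of the pull failures above. A sketch of how to confirm the empty backing set with standard queries:

	kubectl --context functional-441731 get endpointslices -l kubernetes.io/service-name=hello-node
	kubectl --context functional-441731 get pods -l app=hello-node -o jsonpath='{.items[*].status.phase}'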

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 service hello-node --url --format={{.IP}}: exit status 115 (534.793083ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-441731 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 service hello-node --url: exit status 115 (480.543559ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30276
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-441731 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30276
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image load --daemon kicbase/echo-server:functional-441731 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 image load --daemon kicbase/echo-server:functional-441731 --alsologtostderr: (2.893572017s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-441731" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.16s)
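The assertion that fails here (and in the ImageReloadDaemon and ImageTagAndLoadDaemon subtests below) is the image ls check at functional_test.go:466: image load reports success, but the tag never appears in the runtime's image store. A sketch for comparing the two views (the grep pattern is illustrative):

	# what minikube lists
	out/minikube-linux-arm64 -p functional-441731 image ls
	# what CRI-O actually holds on the node
	out/minikube-linux-arm64 -p functional-441731 ssh -- sudo crictl images | grep echo-server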

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image load --daemon kicbase/echo-server:functional-441731 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 image load --daemon kicbase/echo-server:functional-441731 --alsologtostderr: (1.221731836s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-441731" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-441731
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image load --daemon kicbase/echo-server:functional-441731 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-441731" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image save kicbase/echo-server:functional-441731 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)
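Here `image save` exits cleanly but the tarball never lands on disk; the ImageLoadFromFile failure below is downstream of this one, since it tries to load the same missing tar. A minimal save-then-stat sketch (binary, profile, and tag are from the log; the destination path is a hypothetical stand-in for the Jenkins workspace path):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		minikube = "out/minikube-linux-arm64"
		profile  = "functional-441731"
		tag      = "kicbase/echo-server:functional-441731"
		dest     = "/tmp/echo-server-save.tar" // hypothetical path
	)
	if out, err := exec.Command(minikube, "-p", profile, "image", "save", tag, dest).CombinedOutput(); err != nil {
		fmt.Printf("image save failed: %v: %s\n", err, out)
		return
	}
	// This is the condition the test trips on: the command succeeded
	// but the file is not on disk.
	if _, err := os.Stat(dest); err != nil {
		fmt.Println("tarball missing after save:", err)
		return
	}
	fmt.Println("saved to", dest)
}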

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1018 08:50:18.232104 1303587 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:50:18.232934 1303587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:50:18.232951 1303587 out.go:374] Setting ErrFile to fd 2...
	I1018 08:50:18.232957 1303587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:50:18.233262 1303587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:50:18.234010 1303587 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:50:18.234175 1303587 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:50:18.234690 1303587 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
	I1018 08:50:18.255290 1303587 ssh_runner.go:195] Run: systemctl --version
	I1018 08:50:18.255358 1303587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
	I1018 08:50:18.275008 1303587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
	I1018 08:50:18.378943 1303587 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1018 08:50:18.379035 1303587 cache_images.go:254] Failed to load cached images for "functional-441731": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1018 08:50:18.379058 1303587 cache_images.go:266] failed pushing to: functional-441731

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-441731
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image save --daemon kicbase/echo-server:functional-441731 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-441731
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-441731: exit status 1 (22.791234ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-441731

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-441731

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
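For `image save --daemon` the verification is a plain `docker image inspect` of the `localhost/`-prefixed name shown above; the command exits non-zero when the image is absent from the host daemon. A sketch of that check:

package main

import (
	"fmt"
	"os/exec"
)

// inDockerDaemon reports whether the host Docker daemon knows the image;
// `docker image inspect` returns a non-zero exit code when it does not.
func inDockerDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "localhost/kicbase/echo-server:functional-441731" // name from the log
	fmt.Println(ref, "present in daemon:", inDockerDaemon(ref))
}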

TestJSONOutput/pause/Command (1.89s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-274041 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-274041 --output=json --user=testUser: exit status 80 (1.887005178s)

-- stdout --
	{"specversion":"1.0","id":"a1cfa7f3-dabe-4139-9a24-f7a60f1baaa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-274041 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5bde3b9a-b210-42e3-9465-8bcd1820cfb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T09:03:22Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"58cbb580-7bd2-43c4-8ed5-a5254a010606","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-274041 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.89s)
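With --output=json the real failure is carried inside `io.k8s.sigs.minikube.error` events, here GUEST_PAUSE with the runc message embedded in `data.message`. A small filter that pulls those events out of the stream; a sketch whose field names match the events printed above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the CloudEvents-style lines minikube emits with --output=json.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1<<20) // advice boxes make some lines long
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip anything that is not a JSON event
		}
		if e.Type == "io.k8s.sigs.minikube.error" && e.Data["name"] != "" {
			fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}

Piping `out/minikube-linux-arm64 pause -p json-output-274041 --output=json` through this prints the GUEST_PAUSE event and its runc error; the unpause failure below emits GUEST_UNPAUSE the same way.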

TestJSONOutput/unpause/Command (1.85s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-274041 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-274041 --output=json --user=testUser: exit status 80 (1.847877531s)

-- stdout --
	{"specversion":"1.0","id":"c802053b-d6e2-4983-9ba7-103b6dd26655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-274041 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5b374b4d-b090-4872-8158-8c01274747f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T09:03:24Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"96f53ccf-5b06-4725-a8ad-627b016d29a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-274041 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.85s)

TestPause/serial/Pause (8.51s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-285945 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-285945 --alsologtostderr -v=5: exit status 80 (2.295870513s)

-- stdout --
	* Pausing node pause-285945 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 09:21:44.830973 1416953 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:21:44.831187 1416953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:21:44.831210 1416953 out.go:374] Setting ErrFile to fd 2...
	I1018 09:21:44.831228 1416953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:21:44.831499 1416953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:21:44.831764 1416953 out.go:368] Setting JSON to false
	I1018 09:21:44.831807 1416953 mustload.go:65] Loading cluster: pause-285945
	I1018 09:21:44.832286 1416953 config.go:182] Loaded profile config "pause-285945": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:21:44.832788 1416953 cli_runner.go:164] Run: docker container inspect pause-285945 --format={{.State.Status}}
	I1018 09:21:44.861466 1416953 host.go:66] Checking if "pause-285945" exists ...
	I1018 09:21:44.861800 1416953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:21:44.969209 1416953 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2025-10-18 09:21:44.950769189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:21:44.969888 1416953 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-285945 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:21:44.974715 1416953 out.go:179] * Pausing node pause-285945 ... 
	I1018 09:21:44.979202 1416953 host.go:66] Checking if "pause-285945" exists ...
	I1018 09:21:44.979547 1416953 ssh_runner.go:195] Run: systemctl --version
	I1018 09:21:44.979605 1416953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-285945
	I1018 09:21:45.031186 1416953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34796 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/pause-285945/id_rsa Username:docker}
	I1018 09:21:45.226767 1416953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:21:45.246167 1416953 pause.go:52] kubelet running: true
	I1018 09:21:45.246247 1416953 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:21:45.587889 1416953 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:21:45.587989 1416953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:21:45.687238 1416953 cri.go:89] found id: "398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992"
	I1018 09:21:45.687274 1416953 cri.go:89] found id: "37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de"
	I1018 09:21:45.687292 1416953 cri.go:89] found id: "4cb85d8cc7f14d8d9d246217cd05a39d1173d04b8258cd991ce09ef6653e3e56"
	I1018 09:21:45.687298 1416953 cri.go:89] found id: "33dfc732a6724d34a5a3468b14cb0196df6201d71c0ade7debd448b49d51e4a1"
	I1018 09:21:45.687301 1416953 cri.go:89] found id: "11700d96acff9466670c5cf8030c00176989ee1d0ac5ea3f9e31d40694f9219a"
	I1018 09:21:45.687306 1416953 cri.go:89] found id: "cc61a4a23f0644850ab9a93b718afbf434bfd7dabb72b71c31d514a06ab41dd6"
	I1018 09:21:45.687309 1416953 cri.go:89] found id: "f35c8e372dcf6c00ea78ebab8b256123203f31af973dfc78436329501af16b2d"
	I1018 09:21:45.687312 1416953 cri.go:89] found id: "1a4e64037be19372aa7d12c0611a808493277713d7879148571a9fd55986faa2"
	I1018 09:21:45.687315 1416953 cri.go:89] found id: "21581abd06b4468f6862b749514951b88fa19a9799c250033bae2d5038769a0e"
	I1018 09:21:45.687320 1416953 cri.go:89] found id: "0b6f4c5d68f5776db514c3650ffb5153bc00f2908fa3e687038271e781876444"
	I1018 09:21:45.687323 1416953 cri.go:89] found id: "34cd6896ef08a5ecb37b5e68f208db427fb410a5f74cc5675a230e467bd084c2"
	I1018 09:21:45.687334 1416953 cri.go:89] found id: "0538a9e91af3c2594858d5901fc22b2cc12438ad7f8b9e27aac99c9ed1080c70"
	I1018 09:21:45.687338 1416953 cri.go:89] found id: "0280c850869e7c23b1ebc23ed077a035b68a823b0fcc56ddd5a24a101be5ea92"
	I1018 09:21:45.687342 1416953 cri.go:89] found id: "6a420152c7a36a00d7bff2513f0738078c0e95b7ddbdd097bb451e18a06c3cb4"
	I1018 09:21:45.687351 1416953 cri.go:89] found id: ""
	I1018 09:21:45.687399 1416953 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:21:45.706012 1416953 retry.go:31] will retry after 192.318847ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:21:45Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:21:45.899449 1416953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:21:45.914702 1416953 pause.go:52] kubelet running: false
	I1018 09:21:45.914764 1416953 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:21:46.105240 1416953 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:21:46.105335 1416953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:21:46.178339 1416953 cri.go:89] found id: "398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992"
	I1018 09:21:46.178367 1416953 cri.go:89] found id: "37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de"
	I1018 09:21:46.178372 1416953 cri.go:89] found id: "4cb85d8cc7f14d8d9d246217cd05a39d1173d04b8258cd991ce09ef6653e3e56"
	I1018 09:21:46.178376 1416953 cri.go:89] found id: "33dfc732a6724d34a5a3468b14cb0196df6201d71c0ade7debd448b49d51e4a1"
	I1018 09:21:46.178379 1416953 cri.go:89] found id: "11700d96acff9466670c5cf8030c00176989ee1d0ac5ea3f9e31d40694f9219a"
	I1018 09:21:46.178386 1416953 cri.go:89] found id: "cc61a4a23f0644850ab9a93b718afbf434bfd7dabb72b71c31d514a06ab41dd6"
	I1018 09:21:46.178389 1416953 cri.go:89] found id: "f35c8e372dcf6c00ea78ebab8b256123203f31af973dfc78436329501af16b2d"
	I1018 09:21:46.178393 1416953 cri.go:89] found id: "1a4e64037be19372aa7d12c0611a808493277713d7879148571a9fd55986faa2"
	I1018 09:21:46.178396 1416953 cri.go:89] found id: "21581abd06b4468f6862b749514951b88fa19a9799c250033bae2d5038769a0e"
	I1018 09:21:46.178407 1416953 cri.go:89] found id: "0b6f4c5d68f5776db514c3650ffb5153bc00f2908fa3e687038271e781876444"
	I1018 09:21:46.178410 1416953 cri.go:89] found id: "34cd6896ef08a5ecb37b5e68f208db427fb410a5f74cc5675a230e467bd084c2"
	I1018 09:21:46.178414 1416953 cri.go:89] found id: "0538a9e91af3c2594858d5901fc22b2cc12438ad7f8b9e27aac99c9ed1080c70"
	I1018 09:21:46.178417 1416953 cri.go:89] found id: "0280c850869e7c23b1ebc23ed077a035b68a823b0fcc56ddd5a24a101be5ea92"
	I1018 09:21:46.178425 1416953 cri.go:89] found id: "6a420152c7a36a00d7bff2513f0738078c0e95b7ddbdd097bb451e18a06c3cb4"
	I1018 09:21:46.178433 1416953 cri.go:89] found id: ""
	I1018 09:21:46.178483 1416953 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:21:46.194377 1416953 retry.go:31] will retry after 472.408618ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:21:46Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:21:46.666985 1416953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:21:46.686072 1416953 pause.go:52] kubelet running: false
	I1018 09:21:46.686137 1416953 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:21:46.881487 1416953 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:21:46.881580 1416953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:21:47.005795 1416953 cri.go:89] found id: "398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992"
	I1018 09:21:47.005821 1416953 cri.go:89] found id: "37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de"
	I1018 09:21:47.005828 1416953 cri.go:89] found id: "4cb85d8cc7f14d8d9d246217cd05a39d1173d04b8258cd991ce09ef6653e3e56"
	I1018 09:21:47.005831 1416953 cri.go:89] found id: "33dfc732a6724d34a5a3468b14cb0196df6201d71c0ade7debd448b49d51e4a1"
	I1018 09:21:47.005835 1416953 cri.go:89] found id: "11700d96acff9466670c5cf8030c00176989ee1d0ac5ea3f9e31d40694f9219a"
	I1018 09:21:47.005839 1416953 cri.go:89] found id: "cc61a4a23f0644850ab9a93b718afbf434bfd7dabb72b71c31d514a06ab41dd6"
	I1018 09:21:47.005842 1416953 cri.go:89] found id: "f35c8e372dcf6c00ea78ebab8b256123203f31af973dfc78436329501af16b2d"
	I1018 09:21:47.005845 1416953 cri.go:89] found id: "1a4e64037be19372aa7d12c0611a808493277713d7879148571a9fd55986faa2"
	I1018 09:21:47.005848 1416953 cri.go:89] found id: "21581abd06b4468f6862b749514951b88fa19a9799c250033bae2d5038769a0e"
	I1018 09:21:47.005912 1416953 cri.go:89] found id: "0b6f4c5d68f5776db514c3650ffb5153bc00f2908fa3e687038271e781876444"
	I1018 09:21:47.005919 1416953 cri.go:89] found id: "34cd6896ef08a5ecb37b5e68f208db427fb410a5f74cc5675a230e467bd084c2"
	I1018 09:21:47.005922 1416953 cri.go:89] found id: "0538a9e91af3c2594858d5901fc22b2cc12438ad7f8b9e27aac99c9ed1080c70"
	I1018 09:21:47.005925 1416953 cri.go:89] found id: "0280c850869e7c23b1ebc23ed077a035b68a823b0fcc56ddd5a24a101be5ea92"
	I1018 09:21:47.005966 1416953 cri.go:89] found id: "6a420152c7a36a00d7bff2513f0738078c0e95b7ddbdd097bb451e18a06c3cb4"
	I1018 09:21:47.005972 1416953 cri.go:89] found id: ""
	I1018 09:21:47.006061 1416953 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:21:47.030802 1416953 out.go:203] 
	W1018 09:21:47.034087 1416953 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:21:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:21:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:21:47.034160 1416953 out.go:285] * 
	* 
	W1018 09:21:47.043731 1416953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:21:47.047445 1416953 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-285945 --alsologtostderr -v=5" : exit status 80
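The trace above shows the whole pause path: check kubelet, enumerate CRI containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces, then run `sudo runc list -f json`, which keeps failing with `open /run/runc: no such file or directory`. minikube retries twice (after 192ms and 472ms, per retry.go:31) before exiting with GUEST_PAUSE. The retry shape can be approximated as follows; a sketch of the logged behavior, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc runs the exact command that fails in the log above.
func listRunc() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("runc list -f json: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Growing delays, roughly matching the 192ms and 472ms retries in the
	// log; the real backoff appears to be randomized.
	delays := []time.Duration{200 * time.Millisecond, 500 * time.Millisecond}
	err := listRunc()
	for _, d := range delays {
		if err == nil {
			break
		}
		time.Sleep(d)
		err = listRunc()
	}
	if err != nil {
		fmt.Println("giving up (the GUEST_PAUSE path):", err)
	}
}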
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-285945
helpers_test.go:243: (dbg) docker inspect pause-285945:

-- stdout --
	[
	    {
	        "Id": "42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f",
	        "Created": "2025-10-18T09:19:49.138839963Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1407514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:19:49.228376657Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f/hosts",
	        "LogPath": "/var/lib/docker/containers/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f-json.log",
	        "Name": "/pause-285945",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-285945:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-285945",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f",
	                "LowerDir": "/var/lib/docker/overlay2/c956b474b6a982fa19d88a03d4304919dca165ff1e06929307f434a31ddc26e5-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c956b474b6a982fa19d88a03d4304919dca165ff1e06929307f434a31ddc26e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c956b474b6a982fa19d88a03d4304919dca165ff1e06929307f434a31ddc26e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c956b474b6a982fa19d88a03d4304919dca165ff1e06929307f434a31ddc26e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-285945",
	                "Source": "/var/lib/docker/volumes/pause-285945/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-285945",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-285945",
	                "name.minikube.sigs.k8s.io": "pause-285945",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "38b55645d5aba7bfea236b7ade020953c4838ab97b2b830159f77ea92fd4161d",
	            "SandboxKey": "/var/run/docker/netns/38b55645d5ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34796"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34797"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34800"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34798"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34799"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-285945": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:12:c2:12:b8:7d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "761f4f18d66806570520fa98264eb0932cfe4b4b047482b5c705ac244895e541",
	                    "EndpointID": "f19a6d658b1f8a5f7dea7adfe094eed7aec2b2c68264e054e886b4e27a28ee3c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-285945",
	                        "42079d512de7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-285945 -n pause-285945
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-285945 -n pause-285945: exit status 2 (467.68177ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-285945 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-285945 logs -n 25: (1.877067603s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-823174 --schedule 5m                                                                                │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --cancel-scheduled                                                                           │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p scheduled-stop-823174                                                                                              │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │ 18 Oct 25 09:19 UTC │
	│ start   │ -p insufficient-storage-194172 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-194172 │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │                     │
	│ delete  │ -p insufficient-storage-194172                                                                                        │ insufficient-storage-194172 │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │ 18 Oct 25 09:19 UTC │
	│ start   │ -p NoKubernetes-035766 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │                     │
	│ start   │ -p pause-285945 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-285945                │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-035766 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │ 18 Oct 25 09:20 UTC │
	│ start   │ -p NoKubernetes-035766 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:20 UTC │ 18 Oct 25 09:20 UTC │
	│ delete  │ -p NoKubernetes-035766                                                                                                │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-035766 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p pause-285945 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-285945                │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ ssh     │ -p NoKubernetes-035766 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │                     │
	│ stop    │ -p NoKubernetes-035766                                                                                                │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-035766 --driver=docker  --container-runtime=crio                                                      │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ ssh     │ -p NoKubernetes-035766 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │                     │
	│ delete  │ -p NoKubernetes-035766                                                                                                │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p missing-upgrade-995648 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-995648      │ jenkins │ v1.32.0 │ 18 Oct 25 09:21 UTC │                     │
	│ pause   │ -p pause-285945 --alsologtostderr -v=5                                                                                │ pause-285945                │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:21:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:21:26.870281 1416707 out.go:296] Setting OutFile to fd 1 ...
	I1018 09:21:26.870404 1416707 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1018 09:21:26.870408 1416707 out.go:309] Setting ErrFile to fd 2...
	I1018 09:21:26.870413 1416707 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1018 09:21:26.870651 1416707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:21:26.871004 1416707 out.go:303] Setting JSON to false
	I1018 09:21:26.871954 1416707 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39834,"bootTime":1760739453,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:21:26.872012 1416707 start.go:138] virtualization:  
	I1018 09:21:26.876118 1416707 out.go:177] * [missing-upgrade-995648] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1018 09:21:26.879181 1416707 out.go:177]   - MINIKUBE_LOCATION=21767
	I1018 09:21:26.882170 1416707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:21:26.879304 1416707 notify.go:220] Checking for updates...
	I1018 09:21:26.888160 1416707 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:21:26.891146 1416707 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:21:26.894127 1416707 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:21:26.897079 1416707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:21:26.900592 1416707 config.go:182] Loaded profile config "pause-285945": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:21:26.900682 1416707 driver.go:378] Setting default libvirt URI to qemu:///system
	I1018 09:21:26.940861 1416707 docker.go:122] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:21:26.940955 1416707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:21:27.006879 1416707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/last_update_check: {Name:mk8630e82d195d5d83e403578065c22edb09c0cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:21:27.010723 1416707 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1018 09:21:27.013585 1416707 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1018 09:21:27.044824 1416707 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 09:21:27.032373049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:21:27.044922 1416707 docker.go:295] overlay module found
	I1018 09:21:27.048056 1416707 out.go:177] * Using the docker driver based on user configuration
	I1018 09:21:27.050800 1416707 start.go:298] selected driver: docker
	I1018 09:21:27.050816 1416707 start.go:902] validating driver "docker" against <nil>
	I1018 09:21:27.050828 1416707 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:21:27.051455 1416707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:21:27.156960 1416707 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 09:21:27.144276897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:21:27.157102 1416707 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1018 09:21:27.157311 1416707 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 09:21:27.160366 1416707 out.go:177] * Using Docker driver with root privileges
	I1018 09:21:27.163336 1416707 cni.go:84] Creating CNI manager for ""
	I1018 09:21:27.163349 1416707 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:21:27.163359 1416707 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:21:27.163369 1416707 start_flags.go:323] config:
	{Name:missing-upgrade-995648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-995648 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1018 09:21:27.166407 1416707 out.go:177] * Starting control plane node missing-upgrade-995648 in cluster missing-upgrade-995648
	I1018 09:21:27.169172 1416707 cache.go:121] Beginning downloading kic base image for docker with crio
	I1018 09:21:27.171917 1416707 out.go:177] * Pulling base image ...
	I1018 09:21:27.174595 1416707 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 09:21:27.174784 1416707 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1018 09:21:27.205956 1416707 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1018 09:21:27.206135 1416707 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1018 09:21:27.206167 1416707 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1018 09:21:27.229729 1416707 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1018 09:21:27.229744 1416707 cache.go:56] Caching tarball of preloaded images
	I1018 09:21:27.229892 1416707 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 09:21:27.233088 1416707 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1018 09:21:30.060788 1413875 node_ready.go:49] node "pause-285945" is "Ready"
	I1018 09:21:30.060819 1413875 node_ready.go:38] duration metric: took 6.205033798s for node "pause-285945" to be "Ready" ...
	I1018 09:21:30.060839 1413875 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:21:30.060923 1413875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:21:30.161675 1413875 api_server.go:72] duration metric: took 6.627315711s to wait for apiserver process to appear ...
	I1018 09:21:30.161705 1413875 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:21:30.161746 1413875 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:21:30.193808 1413875 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:21:30.193897 1413875 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
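
Each line of the verbose healthz payload is an individual check: [+] entries have passed, and [-] entries are poststarthooks that have not completed yet, which is expected while the apiserver is still coming back up after the restart. The same payload can be fetched directly; a minimal sketch, assuming kubectl is already configured for this cluster:

	kubectl get --raw='/healthz?verbose'
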
	I1018 09:21:30.666878 1413875 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:21:30.708065 1413875 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:21:30.708090 1413875 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:21:27.235934 1416707 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1018 09:21:27.328237 1416707 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1018 09:21:31.803911 1416707 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1018 09:21:31.804031 1416707 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
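
The preload URL above carries its expected digest as a checksum query parameter, which minikube verifies once the download completes. The check can be reproduced by hand against the cached tarball; a sketch using the path and digest from the log:

	md5sum /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	# expected: 3fdaeefa2c0cc3e046170ba83ccf0cac
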
	I1018 09:21:31.162024 1413875 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:21:31.172030 1413875 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:21:31.172060 1413875 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:21:31.662693 1413875 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:21:31.671069 1413875 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:21:31.672149 1413875 api_server.go:141] control plane version: v1.34.1
	I1018 09:21:31.672173 1413875 api_server.go:131] duration metric: took 1.510460359s to wait for apiserver health ...
	I1018 09:21:31.672181 1413875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:21:31.675916 1413875 system_pods.go:59] 7 kube-system pods found
	I1018 09:21:31.675953 1413875 system_pods.go:61] "coredns-66bc5c9577-ch44s" [7bc5ae75-40ea-4059-a024-c849931795f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:21:31.675962 1413875 system_pods.go:61] "etcd-pause-285945" [6bb1df1b-6746-43b0-83ca-0c57aab670f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:21:31.675968 1413875 system_pods.go:61] "kindnet-5mqfk" [36f47490-1959-4b2b-ad86-324d964ab8c0] Running
	I1018 09:21:31.675976 1413875 system_pods.go:61] "kube-apiserver-pause-285945" [48432868-955c-4f60-929c-8cd5681ffa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:21:31.675988 1413875 system_pods.go:61] "kube-controller-manager-pause-285945" [732686a8-1f86-4c8a-8164-5858e7530690] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:21:31.675998 1413875 system_pods.go:61] "kube-proxy-gqf7g" [ac408112-ba80-4c63-bfa7-1eb56aa91129] Running
	I1018 09:21:31.676005 1413875 system_pods.go:61] "kube-scheduler-pause-285945" [a51ddc5a-4318-4211-9360-62b161b6dc3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:21:31.676015 1413875 system_pods.go:74] duration metric: took 3.82734ms to wait for pod list to return data ...
	I1018 09:21:31.676024 1413875 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:21:31.678474 1413875 default_sa.go:45] found service account: "default"
	I1018 09:21:31.678501 1413875 default_sa.go:55] duration metric: took 2.466615ms for default service account to be created ...
	I1018 09:21:31.678510 1413875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:21:31.681215 1413875 system_pods.go:86] 7 kube-system pods found
	I1018 09:21:31.681249 1413875 system_pods.go:89] "coredns-66bc5c9577-ch44s" [7bc5ae75-40ea-4059-a024-c849931795f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:21:31.681258 1413875 system_pods.go:89] "etcd-pause-285945" [6bb1df1b-6746-43b0-83ca-0c57aab670f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:21:31.681264 1413875 system_pods.go:89] "kindnet-5mqfk" [36f47490-1959-4b2b-ad86-324d964ab8c0] Running
	I1018 09:21:31.681271 1413875 system_pods.go:89] "kube-apiserver-pause-285945" [48432868-955c-4f60-929c-8cd5681ffa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:21:31.681278 1413875 system_pods.go:89] "kube-controller-manager-pause-285945" [732686a8-1f86-4c8a-8164-5858e7530690] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:21:31.681305 1413875 system_pods.go:89] "kube-proxy-gqf7g" [ac408112-ba80-4c63-bfa7-1eb56aa91129] Running
	I1018 09:21:31.681313 1413875 system_pods.go:89] "kube-scheduler-pause-285945" [a51ddc5a-4318-4211-9360-62b161b6dc3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:21:31.681324 1413875 system_pods.go:126] duration metric: took 2.807906ms to wait for k8s-apps to be running ...
	I1018 09:21:31.681332 1413875 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:21:31.681391 1413875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:21:31.695343 1413875 system_svc.go:56] duration metric: took 13.995667ms WaitForService to wait for kubelet
	I1018 09:21:31.695373 1413875 kubeadm.go:586] duration metric: took 8.161020589s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:21:31.695391 1413875 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:21:31.698406 1413875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:21:31.698438 1413875 node_conditions.go:123] node cpu capacity is 2
	I1018 09:21:31.698450 1413875 node_conditions.go:105] duration metric: took 3.053593ms to run NodePressure ...
	I1018 09:21:31.698462 1413875 start.go:241] waiting for startup goroutines ...
	I1018 09:21:31.698469 1413875 start.go:246] waiting for cluster config update ...
	I1018 09:21:31.698478 1413875 start.go:255] writing updated cluster config ...
	I1018 09:21:31.698777 1413875 ssh_runner.go:195] Run: rm -f paused
	I1018 09:21:31.704761 1413875 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:21:31.705252 1413875 kapi.go:59] client config for pause-285945: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/pause-285945/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/pause-285945/client.key", CAFile:"/home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:21:31.709401 1413875 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ch44s" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:21:33.716543 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	I1018 09:21:32.690992 1416707 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1018 09:21:32.691004 1416707 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1018 09:21:33.062294 1416707 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1018 09:21:33.067710 1416707 profile.go:148] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/missing-upgrade-995648/config.json ...
	I1018 09:21:33.067756 1416707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/missing-upgrade-995648/config.json: {Name:mkfc456014863ee15198df9815eb51dba8e2ce4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1018 09:21:36.215372 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	W1018 09:21:38.215646 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	W1018 09:21:40.216339 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	W1018 09:21:42.219381 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	I1018 09:21:43.215738 1413875 pod_ready.go:94] pod "coredns-66bc5c9577-ch44s" is "Ready"
	I1018 09:21:43.215761 1413875 pod_ready.go:86] duration metric: took 11.506324204s for pod "coredns-66bc5c9577-ch44s" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.219124 1413875 pod_ready.go:83] waiting for pod "etcd-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.225083 1413875 pod_ready.go:94] pod "etcd-pause-285945" is "Ready"
	I1018 09:21:43.225113 1413875 pod_ready.go:86] duration metric: took 5.967187ms for pod "etcd-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.229054 1413875 pod_ready.go:83] waiting for pod "kube-apiserver-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.234075 1413875 pod_ready.go:94] pod "kube-apiserver-pause-285945" is "Ready"
	I1018 09:21:43.234153 1413875 pod_ready.go:86] duration metric: took 5.067979ms for pod "kube-apiserver-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.236607 1413875 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.413679 1413875 pod_ready.go:94] pod "kube-controller-manager-pause-285945" is "Ready"
	I1018 09:21:43.413707 1413875 pod_ready.go:86] duration metric: took 177.07744ms for pod "kube-controller-manager-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.613321 1413875 pod_ready.go:83] waiting for pod "kube-proxy-gqf7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:44.014135 1413875 pod_ready.go:94] pod "kube-proxy-gqf7g" is "Ready"
	I1018 09:21:44.014165 1413875 pod_ready.go:86] duration metric: took 400.814866ms for pod "kube-proxy-gqf7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:44.213198 1413875 pod_ready.go:83] waiting for pod "kube-scheduler-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:44.613917 1413875 pod_ready.go:94] pod "kube-scheduler-pause-285945" is "Ready"
	I1018 09:21:44.613943 1413875 pod_ready.go:86] duration metric: took 400.721405ms for pod "kube-scheduler-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:44.613955 1413875 pod_ready.go:40] duration metric: took 12.909166206s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:21:44.696019 1413875 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:21:44.707995 1413875 out.go:179] * Done! kubectl is now configured to use "pause-285945" cluster and "default" namespace by default
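
The "minor skew: 1" note is informational: kubectl 1.33 against a 1.34 API server is within kubectl's supported skew of one minor version in either direction. The client/server pair can be confirmed with:

	kubectl version --output=yaml
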
	
	
	==> CRI-O <==
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.169694893Z" level=info msg="Starting container: 11700d96acff9466670c5cf8030c00176989ee1d0ac5ea3f9e31d40694f9219a" id=60c1e74f-6654-4adc-82d5-74a77dd4689c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.184568567Z" level=info msg="Started container" PID=2191 containerID=33dfc732a6724d34a5a3468b14cb0196df6201d71c0ade7debd448b49d51e4a1 description=kube-system/kindnet-5mqfk/kindnet-cni id=a6016aa2-d3d6-4ec4-aff8-750b42d415fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab5309f61d46ceac3bf534b71f824672eb9505e25e4c7b4bdcf3d915c173059b
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.184799132Z" level=info msg="Started container" PID=2186 containerID=4cb85d8cc7f14d8d9d246217cd05a39d1173d04b8258cd991ce09ef6653e3e56 description=kube-system/etcd-pause-285945/etcd id=84c803f2-06e4-4c2b-a9bd-e953300511d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38d536a046c058042ceceb4d8a98b3f05abff4ee290e2d876e06116124fecbd1
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.19372097Z" level=info msg="Created container 398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992: kube-system/kube-proxy-gqf7g/kube-proxy" id=0dcf6988-1c14-4b66-99be-03832fd725e2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.19401729Z" level=info msg="Created container 37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de: kube-system/kube-apiserver-pause-285945/kube-apiserver" id=ed6acad8-c779-4775-ad33-0f19cb8e326e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.195325595Z" level=info msg="Starting container: 398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992" id=1d1d4a37-6f68-4758-a541-2693585895f7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.202706815Z" level=info msg="Starting container: 37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de" id=c9a12835-c2d2-407a-b8a1-49105a4b954b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.207794551Z" level=info msg="Started container" PID=2172 containerID=11700d96acff9466670c5cf8030c00176989ee1d0ac5ea3f9e31d40694f9219a description=kube-system/kube-controller-manager-pause-285945/kube-controller-manager id=60c1e74f-6654-4adc-82d5-74a77dd4689c name=/runtime.v1.RuntimeService/StartContainer sandboxID=2703c8ae03b3edc1ec788a74a49f01555c2c2b096b91cdd4a5e73c8722b0e08c
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.210073192Z" level=info msg="Started container" PID=2204 containerID=37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de description=kube-system/kube-apiserver-pause-285945/kube-apiserver id=c9a12835-c2d2-407a-b8a1-49105a4b954b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3e71e34af08c2e76bb659e1aba557a23c63221c69dce7c8d76c52dd82d6ae0e
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.222931766Z" level=info msg="Started container" PID=2206 containerID=398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992 description=kube-system/kube-proxy-gqf7g/kube-proxy id=1d1d4a37-6f68-4758-a541-2693585895f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e102ec99b1bf20cdc1d2a9099402ac5377bfa4b4764b73ba27adffc93b337fc
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.510722592Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.514400399Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.514571815Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.514643887Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.518743236Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.518780208Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.518802656Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.524908752Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.524942286Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.524965555Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.529655788Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.529691431Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.52971785Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.534340574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.534503146Z" level=info msg="Updated default CNI network name to kindnet"
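
The CREATE/WRITE/RENAME sequence above is kindnet updating its CNI config atomically: it writes 10-kindnet.conflist.temp and renames it over 10-kindnet.conflist, and CRI-O's config watcher re-resolves the default network on each event. The resulting config can be inspected on the node; a sketch, assuming SSH access through the profile:

	minikube -p pause-285945 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist
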
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	398d36b13bd72       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   27 seconds ago       Running             kube-proxy                1                   3e102ec99b1bf       kube-proxy-gqf7g                       kube-system
	37cc300384b47       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   27 seconds ago       Running             kube-apiserver            1                   b3e71e34af08c       kube-apiserver-pause-285945            kube-system
	4cb85d8cc7f14       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   27 seconds ago       Running             etcd                      1                   38d536a046c05       etcd-pause-285945                      kube-system
	33dfc732a6724       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   27 seconds ago       Running             kindnet-cni               1                   ab5309f61d46c       kindnet-5mqfk                          kube-system
	11700d96acff9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   27 seconds ago       Running             kube-controller-manager   1                   2703c8ae03b3e       kube-controller-manager-pause-285945   kube-system
	cc61a4a23f064       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   27 seconds ago       Running             kube-scheduler            1                   6545c64a6779d       kube-scheduler-pause-285945            kube-system
	f35c8e372dcf6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   27 seconds ago       Running             coredns                   1                   45c0cbebe9f23       coredns-66bc5c9577-ch44s               kube-system
	1a4e64037be19       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   40 seconds ago       Exited              coredns                   0                   45c0cbebe9f23       coredns-66bc5c9577-ch44s               kube-system
	21581abd06b44       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   3e102ec99b1bf       kube-proxy-gqf7g                       kube-system
	0b6f4c5d68f57       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   ab5309f61d46c       kindnet-5mqfk                          kube-system
	34cd6896ef08a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   2703c8ae03b3e       kube-controller-manager-pause-285945   kube-system
	0538a9e91af3c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   38d536a046c05       etcd-pause-285945                      kube-system
	0280c850869e7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   6545c64a6779d       kube-scheduler-pause-285945            kube-system
	6a420152c7a36       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b3e71e34af08c       kube-apiserver-pause-285945            kube-system
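
This table is CRI-O's container inventory for the node: each control-plane component shows an Exited ATTEMPT 0 container from before the pause and a Running ATTEMPT 1 container from the restart, reusing the same POD ID. The listing can be reproduced with:

	minikube -p pause-285945 ssh -- sudo crictl ps -a
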
	
	
	==> coredns [1a4e64037be19372aa7d12c0611a808493277713d7879148571a9fd55986faa2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54952 - 36421 "HINFO IN 8726939055143285893.3432354257386852064. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021793024s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f35c8e372dcf6c00ea78ebab8b256123203f31af973dfc78436329501af16b2d] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58797 - 64172 "HINFO IN 8098128589160393520.4773053509409951149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010891884s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
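
The "forbidden" errors above are consistent with this coredns container starting while the restarted apiserver was still completing its rbac/bootstrap-roles poststarthook (the [-] entries in the healthz output earlier), so the system:coredns ClusterRole binding had not yet been reconciled. Once bootstrapping finishes, the permission can be verified; a sketch:

	kubectl auth can-i list endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:coredns
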
	
	
	==> describe nodes <==
	Name:               pause-285945
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-285945
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=pause-285945
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_20_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:20:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-285945
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:21:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:21:11 +0000   Sat, 18 Oct 2025 09:20:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:21:11 +0000   Sat, 18 Oct 2025 09:20:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:21:11 +0000   Sat, 18 Oct 2025 09:20:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:21:11 +0000   Sat, 18 Oct 2025 09:21:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-285945
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                1d3b9ce8-5086-43aa-8774-123f7e957e3d
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ch44s                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     82s
	  kube-system                 etcd-pause-285945                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         88s
	  kube-system                 kindnet-5mqfk                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-pause-285945             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-285945    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-gqf7g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-pause-285945             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 81s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientMemory  98s (x8 over 98s)  kubelet          Node pause-285945 status is now: NodeHasSufficientMemory
	  Normal   Starting                 98s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 98s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    98s (x8 over 98s)  kubelet          Node pause-285945 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s (x8 over 98s)  kubelet          Node pause-285945 status is now: NodeHasSufficientPID
	  Normal   Starting                 88s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s                kubelet          Node pause-285945 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s                kubelet          Node pause-285945 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s                kubelet          Node pause-285945 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           83s                node-controller  Node pause-285945 event: Registered Node pause-285945 in Controller
	  Normal   NodeReady                41s                kubelet          Node pause-285945 status is now: NodeReady
	  Warning  ContainerGCFailed        28s                kubelet          [rpc error: code = Unavailable desc = error reading from server: read unix @->/var/run/crio/crio.sock: read: connection reset by peer, rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"]
	  Normal   RegisteredNode           15s                node-controller  Node pause-285945 event: Registered Node pause-285945 in Controller
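
The ContainerGCFailed warning records the window in which crio.sock was unreachable, consistent with the container runtime being restarted underneath the kubelet during this test; it stops recurring once CRI-O is back and the node is re-registered. This whole section can be regenerated with:

	kubectl describe node pause-285945
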
	
	
	==> dmesg <==
	[  +3.790220] overlayfs: idmapped layers are currently not supported
	[Oct18 08:57] overlayfs: idmapped layers are currently not supported
	[Oct18 08:58] overlayfs: idmapped layers are currently not supported
	[Oct18 08:59] overlayfs: idmapped layers are currently not supported
	[  +2.831556] overlayfs: idmapped layers are currently not supported
	[ +37.438223] overlayfs: idmapped layers are currently not supported
	[Oct18 09:00] overlayfs: idmapped layers are currently not supported
	[Oct18 09:02] overlayfs: idmapped layers are currently not supported
	[Oct18 09:07] overlayfs: idmapped layers are currently not supported
	[ +35.005632] overlayfs: idmapped layers are currently not supported
	[Oct18 09:08] overlayfs: idmapped layers are currently not supported
	[Oct18 09:10] overlayfs: idmapped layers are currently not supported
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0538a9e91af3c2594858d5901fc22b2cc12438ad7f8b9e27aac99c9ed1080c70] <==
	{"level":"warn","ts":"2025-10-18T09:20:15.195971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.225645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.262929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.295797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.329894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.338740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.500856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52034","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:21:13.349075Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T09:21:13.349122Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-285945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-18T09:21:13.349205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:21:13.640325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:21:13.641813Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:21:13.641870Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-18T09:21:13.641930Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T09:21:13.641950Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T09:21:13.641985Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:21:13.642049Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:21:13.642086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T09:21:13.642159Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:21:13.642179Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:21:13.642189Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:21:13.645203Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-18T09:21:13.645277Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:21:13.645343Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:21:13.645388Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-285945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [4cb85d8cc7f14d8d9d246217cd05a39d1173d04b8258cd991ce09ef6653e3e56] <==
	{"level":"warn","ts":"2025-10-18T09:21:27.740793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.784917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.871355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.902723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.933501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.954920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.983646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.030423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.047989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.090376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.167984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.181599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.203120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.251658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.256621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.282101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.311590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.337849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.353721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.367147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.392580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.426298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.477274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.478285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.567733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43474","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:21:48 up 11:04,  0 user,  load average: 3.75, 2.21, 1.89
	Linux pause-285945 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0b6f4c5d68f5776db514c3650ffb5153bc00f2908fa3e687038271e781876444] <==
	I1018 09:20:26.910282       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:20:26.910975       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:20:26.911102       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:20:26.911114       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:20:26.911126       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:20:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:20:27.119538       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:20:27.119607       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:20:27.119617       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:20:27.119739       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:20:57.120285       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:20:57.120298       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:20:57.120410       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:20:57.120547       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 09:20:58.719739       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:20:58.719999       1 metrics.go:72] Registering metrics
	I1018 09:20:58.720104       1 controller.go:711] "Syncing nftables rules"
	I1018 09:21:07.125914       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:21:07.125969       1 main.go:301] handling current node
	
	
	==> kindnet [33dfc732a6724d34a5a3468b14cb0196df6201d71c0ade7debd448b49d51e4a1] <==
	I1018 09:21:21.255065       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:21:21.265260       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:21:21.265408       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:21:21.265420       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:21:21.265434       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:21:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:21:21.515476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:21:21.515519       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:21:21.515528       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:21:21.516283       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:21:30.032920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:21:30.037547       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 09:21:30.037611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:21:30.037658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 09:21:31.515674       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:21:31.515709       1 metrics.go:72] Registering metrics
	I1018 09:21:31.515761       1 controller.go:711] "Syncing nftables rules"
	I1018 09:21:41.510303       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:21:41.510411       1 main.go:301] handling current node
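	Both kindnet instances show the same client-go pattern: reflectors log "Failed to watch" while the apiserver is unreachable (i/o timeout, first instance) or while the restarted apiserver is still syncing RBAC (forbidden, second instance), then retry until the initial list succeeds and "Caches are synced" appears. A minimal sketch of that wait, assuming a standard kubeconfig (names here are illustrative, not kindnet's code):
	
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		factory := informers.NewSharedInformerFactory(cs, 0)
		pods := factory.Core().V1().Pods().Informer()
		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
		// Blocks while reflectors list/watch and retry; transient failures
		// surface as the "Failed to watch" lines seen above.
		factory.WaitForCacheSync(stop)
		fmt.Println("caches synced:", pods.HasSynced())
	}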
	
	
	==> kube-apiserver [37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de] <==
	I1018 09:21:30.041100       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1018 09:21:30.091401       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:21:30.091526       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:21:30.130410       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:21:30.130525       1 policy_source.go:240] refreshing policies
	I1018 09:21:30.131347       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:21:30.137621       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:21:30.137803       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:21:30.149380       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:21:30.153164       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:21:30.182075       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:21:30.187740       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:21:30.201955       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:21:30.202069       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:21:30.203666       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1018 09:21:30.232334       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:21:30.241719       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:21:30.246784       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:21:30.263259       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:21:30.728116       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:21:31.949924       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:21:33.293591       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:21:33.334121       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:21:33.508841       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:21:33.607776       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [6a420152c7a36a00d7bff2513f0738078c0e95b7ddbdd097bb451e18a06c3cb4] <==
	W1018 09:21:13.383107       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383166       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383200       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383253       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383311       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383365       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383413       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383447       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383505       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383585       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383631       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383668       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383719       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383770       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.382563       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383940       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383996       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.384058       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.382189       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.382409       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383225       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383018       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.384186       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383640       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383421       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
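	Each "Channel #N" warning above is one of the apiserver's per-resource etcd client connections redialing after etcd closed 127.0.0.1:2379; gRPC keeps retrying with "connection refused" until the process itself is stopped. A hedged sketch of the same transport error against a port with no listener (illustrative, not apiserver code):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		// WithBlock makes the dial wait, so the refused-connection retries are
		// observable as the final error instead of happening in the background.
		_, err := grpc.DialContext(ctx, "127.0.0.1:2379",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithBlock())
		fmt.Println(err) // deadline exceeded after repeated "connection refused"
	}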
	
	
	==> kube-controller-manager [11700d96acff9466670c5cf8030c00176989ee1d0ac5ea3f9e31d40694f9219a] <==
	I1018 09:21:33.222306       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:21:33.226999       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:21:33.230059       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:21:33.230187       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:21:33.234055       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:21:33.235410       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:21:33.235496       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:21:33.236929       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:21:33.240379       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:21:33.241834       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:21:33.245732       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:21:33.245880       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:21:33.246640       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:21:33.246803       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:21:33.249110       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:21:33.246878       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:21:33.246860       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:21:33.256783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-285945"
	I1018 09:21:33.258425       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:21:33.260064       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:21:33.262722       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:21:33.295608       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:21:33.295688       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:21:33.295698       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:21:33.374552       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [34cd6896ef08a5ecb37b5e68f208db427fb410a5f74cc5675a230e467bd084c2] <==
	I1018 09:20:25.330086       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:20:25.330106       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:20:25.333678       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:20:25.334064       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:20:25.334620       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:20:25.335078       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 09:20:25.335101       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:20:25.373399       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:20:25.373559       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:20:25.379291       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:20:25.379462       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:20:25.379493       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:20:25.379505       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:20:25.379554       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:20:25.379614       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-285945"
	I1018 09:20:25.379648       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:20:25.379672       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:20:25.384588       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:20:25.384642       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:20:25.481618       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:20:25.784724       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:20:25.784751       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:20:25.784758       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:20:25.842974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:21:10.387766       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [21581abd06b4468f6862b749514951b88fa19a9799c250033bae2d5038769a0e] <==
	I1018 09:20:26.938288       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:20:27.076634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:20:27.213074       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:20:27.213113       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:20:27.213185       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:20:27.237234       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:20:27.237289       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:20:27.242515       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:20:27.242816       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:20:27.242838       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:20:27.243952       1 config.go:200] "Starting service config controller"
	I1018 09:20:27.244012       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:20:27.249661       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:20:27.250842       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:20:27.250932       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:20:27.250962       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:20:27.251869       1 config.go:309] "Starting node config controller"
	I1018 09:20:27.253870       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:20:27.253922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:20:27.344442       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:20:27.351709       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:20:27.351714       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992] <==
	I1018 09:21:24.135235       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:21:25.549131       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1018 09:21:30.425936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-285945\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 09:21:31.549806       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:21:31.549967       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:21:31.550107       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:21:31.614327       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:21:31.614384       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:21:31.618474       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:21:31.618746       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:21:31.618771       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:21:31.620073       1 config.go:200] "Starting service config controller"
	I1018 09:21:31.620139       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:21:31.620233       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:21:31.620276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:21:31.620314       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:21:31.620340       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:21:31.621332       1 config.go:309] "Starting node config controller"
	I1018 09:21:31.624887       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:21:31.624967       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:21:31.720425       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:21:31.729057       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:21:31.729096       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
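	Both kube-proxy instances log the same hint: with nodePortAddresses unset, NodePort connections are accepted on every local IP. The suggested --nodeport-addresses primary maps to a one-line change in the KubeProxyConfiguration file (a sketch, assuming the v1alpha1 config kube-proxy already consumes; "primary" restricts NodePort listeners to the node's primary addresses):
	
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses:
	  - primary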
	
	
	==> kube-scheduler [0280c850869e7c23b1ebc23ed077a035b68a823b0fcc56ddd5a24a101be5ea92] <==
	I1018 09:20:15.916556       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:20:18.707870       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:20:18.707998       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:20:18.708034       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:20:18.708064       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:20:18.764441       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:20:18.764839       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:20:18.767209       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:20:18.770903       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:20:18.775950       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:20:18.770934       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 09:20:18.801202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 09:20:20.477268       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:21:13.359550       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 09:21:13.359675       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 09:21:13.359686       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 09:21:13.359704       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:21:13.359902       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 09:21:13.360019       1 run.go:72] "command failed" err="finished without leader elect"
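	The startup warnings in this scheduler instance spell out their own remedy: it could not read the extension-apiserver-authentication ConfigMap and continued without that authentication configuration; in this run the condition was transient and the cache synced at 09:20:20. The log's suggested fix, kept with its placeholders (ROLEBINDING_NAME and YOUR_NS:YOUR_SA are to be substituted, not values from this run):
	
	kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --serviceaccount=YOUR_NS:YOUR_SA
	
	The closing "finished without leader elect" error appears here as part of the graceful-termination sequence logged just above it, rather than as a distinct failure.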
	
	
	==> kube-scheduler [cc61a4a23f0644850ab9a93b718afbf434bfd7dabb72b71c31d514a06ab41dd6] <==
	I1018 09:21:25.895598       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:21:30.333085       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:21:30.336781       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:21:30.392935       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:21:30.393120       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:21:30.393186       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:21:30.394287       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:21:30.403605       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:21:30.411916       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:21:30.412055       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:21:30.412102       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:21:30.496110       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:21:30.512855       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:21:30.513001       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:21:20 pause-285945 kubelet[1295]: E1018 09:21:20.886922    1295 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-285945\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="24737c116b177f9657f1e0a9d2efd2f3" pod="kube-system/kube-scheduler-pause-285945"
	Oct 18 09:21:20 pause-285945 kubelet[1295]: E1018 09:21:20.887241    1295 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-285945\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1e9ce09b8a1f2a8436babb2348741acf" pod="kube-system/kube-controller-manager-pause-285945"
	Oct 18 09:21:20 pause-285945 kubelet[1295]: E1018 09:21:20.887553    1295 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-285945\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="4baf9c8602f49720683ddd1f83259199" pod="kube-system/etcd-pause-285945"
	Oct 18 09:21:20 pause-285945 kubelet[1295]: E1018 09:21:20.887928    1295 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-285945\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5db594b15ce04b469bce45ecdb4e9905" pod="kube-system/kube-apiserver-pause-285945"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.850523    1295 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-285945\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.851275    1295 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-285945\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.851972    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="4baf9c8602f49720683ddd1f83259199" pod="kube-system/etcd-pause-285945"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.881857    1295 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-285945\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.882555    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="5db594b15ce04b469bce45ecdb4e9905" pod="kube-system/kube-apiserver-pause-285945"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.903353    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5mqfk\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="36f47490-1959-4b2b-ad86-324d964ab8c0" pod="kube-system/kindnet-5mqfk"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.915421    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-gqf7g\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="ac408112-ba80-4c63-bfa7-1eb56aa91129" pod="kube-system/kube-proxy-gqf7g"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.952896    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-ch44s\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="7bc5ae75-40ea-4059-a024-c849931795f7" pod="kube-system/coredns-66bc5c9577-ch44s"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.992508    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="24737c116b177f9657f1e0a9d2efd2f3" pod="kube-system/kube-scheduler-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.046501    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="1e9ce09b8a1f2a8436babb2348741acf" pod="kube-system/kube-controller-manager-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.049770    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="24737c116b177f9657f1e0a9d2efd2f3" pod="kube-system/kube-scheduler-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.053739    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="1e9ce09b8a1f2a8436babb2348741acf" pod="kube-system/kube-controller-manager-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.068190    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="4baf9c8602f49720683ddd1f83259199" pod="kube-system/etcd-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.070833    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="5db594b15ce04b469bce45ecdb4e9905" pod="kube-system/kube-apiserver-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.073655    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5mqfk\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="36f47490-1959-4b2b-ad86-324d964ab8c0" pod="kube-system/kindnet-5mqfk"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.080242    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-gqf7g\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="ac408112-ba80-4c63-bfa7-1eb56aa91129" pod="kube-system/kube-proxy-gqf7g"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.082824    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-ch44s\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="7bc5ae75-40ea-4059-a024-c849931795f7" pod="kube-system/coredns-66bc5c9577-ch44s"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: W1018 09:21:30.779745    1295 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 09:21:45 pause-285945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:21:45 pause-285945 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:21:45 pause-285945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
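	The repeated "no relationship found between node 'pause-285945' and this object" errors come from the apiserver's node authorizer rejecting kubelet requests while its node-to-object graph is rebuilt after the restart; they stop at 09:21:30 once the graph catches up. A quick hedged check that the apiserver has settled after such a restart (standard readiness endpoint, cluster name taken from this run):
	
	kubectl --context pause-285945 get --raw='/readyz?verbose'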
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-285945 -n pause-285945
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-285945 -n pause-285945: exit status 2 (537.088426ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-285945 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-285945
helpers_test.go:243: (dbg) docker inspect pause-285945:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f",
	        "Created": "2025-10-18T09:19:49.138839963Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1407514,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:19:49.228376657Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f/hosts",
	        "LogPath": "/var/lib/docker/containers/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f/42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f-json.log",
	        "Name": "/pause-285945",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-285945:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-285945",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42079d512de739891f5cc227cfc7d01b1a19121425e4fc9c2ae122ee4df2906f",
	                "LowerDir": "/var/lib/docker/overlay2/c956b474b6a982fa19d88a03d4304919dca165ff1e06929307f434a31ddc26e5-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c956b474b6a982fa19d88a03d4304919dca165ff1e06929307f434a31ddc26e5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c956b474b6a982fa19d88a03d4304919dca165ff1e06929307f434a31ddc26e5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c956b474b6a982fa19d88a03d4304919dca165ff1e06929307f434a31ddc26e5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-285945",
	                "Source": "/var/lib/docker/volumes/pause-285945/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-285945",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-285945",
	                "name.minikube.sigs.k8s.io": "pause-285945",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "38b55645d5aba7bfea236b7ade020953c4838ab97b2b830159f77ea92fd4161d",
	            "SandboxKey": "/var/run/docker/netns/38b55645d5ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34796"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34797"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34800"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34798"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34799"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-285945": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:12:c2:12:b8:7d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "761f4f18d66806570520fa98264eb0932cfe4b4b047482b5c705ac244895e541",
	                    "EndpointID": "f19a6d658b1f8a5f7dea7adfe094eed7aec2b2c68264e054e886b4e27a28ee3c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-285945",
	                        "42079d512de7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
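
For reference, the "Ports" block under "NetworkSettings" in the inspect output above is how the kic driver discovers the random host ports the daemon assigned to 22, 2376, 5000, 8443 and 32443. A minimal standalone sketch of reading those mappings with the Docker Go SDK (an illustration, not minikube's own code; the container name is the one from this run):

    // Sketch: list host port mappings for the kic container via the Docker Go SDK.
    // Assumes github.com/docker/docker is available; "pause-285945" is the
    // container inspected above.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        info, err := cli.ContainerInspect(context.Background(), "pause-285945")
        if err != nil {
            log.Fatal(err)
        }
        // NetworkSettings.Ports mirrors the "Ports" JSON above:
        // container port -> host (IP, port) bindings chosen by the daemon.
        for port, bindings := range info.NetworkSettings.Ports {
            for _, b := range bindings {
                fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
            }
        }
    }

Note that HostConfig.PortBindings in the inspect output requests 127.0.0.1 with an empty HostPort, which is why the daemon picks the ephemeral ports (34796-34800) seen under NetworkSettings.Ports.
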
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-285945 -n pause-285945
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-285945 -n pause-285945: exit status 2 (338.873117ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-285945 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-285945 logs -n 25: (2.108836844s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-823174 --schedule 5m                                                                                │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --cancel-scheduled                                                                           │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │                     │
	│ stop    │ -p scheduled-stop-823174 --schedule 15s                                                                               │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:18 UTC │ 18 Oct 25 09:18 UTC │
	│ delete  │ -p scheduled-stop-823174                                                                                              │ scheduled-stop-823174       │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │ 18 Oct 25 09:19 UTC │
	│ start   │ -p insufficient-storage-194172 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-194172 │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │                     │
	│ delete  │ -p insufficient-storage-194172                                                                                        │ insufficient-storage-194172 │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │ 18 Oct 25 09:19 UTC │
	│ start   │ -p NoKubernetes-035766 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │                     │
	│ start   │ -p pause-285945 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-285945                │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-035766 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:19 UTC │ 18 Oct 25 09:20 UTC │
	│ start   │ -p NoKubernetes-035766 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:20 UTC │ 18 Oct 25 09:20 UTC │
	│ delete  │ -p NoKubernetes-035766                                                                                                │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-035766 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p pause-285945 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-285945                │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ ssh     │ -p NoKubernetes-035766 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │                     │
	│ stop    │ -p NoKubernetes-035766                                                                                                │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p NoKubernetes-035766 --driver=docker  --container-runtime=crio                                                      │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ ssh     │ -p NoKubernetes-035766 sudo systemctl is-active --quiet service kubelet                                               │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │                     │
	│ delete  │ -p NoKubernetes-035766                                                                                                │ NoKubernetes-035766         │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │ 18 Oct 25 09:21 UTC │
	│ start   │ -p missing-upgrade-995648 --memory=3072 --driver=docker  --container-runtime=crio                                     │ missing-upgrade-995648      │ jenkins │ v1.32.0 │ 18 Oct 25 09:21 UTC │                     │
	│ pause   │ -p pause-285945 --alsologtostderr -v=5                                                                                │ pause-285945                │ jenkins │ v1.37.0 │ 18 Oct 25 09:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:21:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:21:26.870281 1416707 out.go:296] Setting OutFile to fd 1 ...
	I1018 09:21:26.870404 1416707 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1018 09:21:26.870408 1416707 out.go:309] Setting ErrFile to fd 2...
	I1018 09:21:26.870413 1416707 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1018 09:21:26.870651 1416707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:21:26.871004 1416707 out.go:303] Setting JSON to false
	I1018 09:21:26.871954 1416707 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39834,"bootTime":1760739453,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:21:26.872012 1416707 start.go:138] virtualization:  
	I1018 09:21:26.876118 1416707 out.go:177] * [missing-upgrade-995648] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1018 09:21:26.879181 1416707 out.go:177]   - MINIKUBE_LOCATION=21767
	I1018 09:21:26.882170 1416707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:21:26.879304 1416707 notify.go:220] Checking for updates...
	I1018 09:21:26.888160 1416707 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:21:26.891146 1416707 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:21:26.894127 1416707 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:21:26.897079 1416707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:21:26.900592 1416707 config.go:182] Loaded profile config "pause-285945": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:21:26.900682 1416707 driver.go:378] Setting default libvirt URI to qemu:///system
	I1018 09:21:26.940861 1416707 docker.go:122] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:21:26.940955 1416707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:21:27.006879 1416707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/last_update_check: {Name:mk8630e82d195d5d83e403578065c22edb09c0cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:21:27.010723 1416707 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1018 09:21:27.013585 1416707 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1018 09:21:27.044824 1416707 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 09:21:27.032373049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:21:27.044922 1416707 docker.go:295] overlay module found
	I1018 09:21:27.048056 1416707 out.go:177] * Using the docker driver based on user configuration
	I1018 09:21:27.050800 1416707 start.go:298] selected driver: docker
	I1018 09:21:27.050816 1416707 start.go:902] validating driver "docker" against <nil>
	I1018 09:21:27.050828 1416707 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:21:27.051455 1416707 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:21:27.156960 1416707 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 09:21:27.144276897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:21:27.157102 1416707 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1018 09:21:27.157311 1416707 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 09:21:27.160366 1416707 out.go:177] * Using Docker driver with root privileges
	I1018 09:21:27.163336 1416707 cni.go:84] Creating CNI manager for ""
	I1018 09:21:27.163349 1416707 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:21:27.163359 1416707 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:21:27.163369 1416707 start_flags.go:323] config:
	{Name:missing-upgrade-995648 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-995648 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1018 09:21:27.166407 1416707 out.go:177] * Starting control plane node missing-upgrade-995648 in cluster missing-upgrade-995648
	I1018 09:21:27.169172 1416707 cache.go:121] Beginning downloading kic base image for docker with crio
	I1018 09:21:27.171917 1416707 out.go:177] * Pulling base image ...
	I1018 09:21:27.174595 1416707 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 09:21:27.174784 1416707 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1018 09:21:27.205956 1416707 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1018 09:21:27.206135 1416707 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1018 09:21:27.206167 1416707 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1018 09:21:27.229729 1416707 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1018 09:21:27.229744 1416707 cache.go:56] Caching tarball of preloaded images
	I1018 09:21:27.229892 1416707 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 09:21:27.233088 1416707 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1018 09:21:30.060788 1413875 node_ready.go:49] node "pause-285945" is "Ready"
	I1018 09:21:30.060819 1413875 node_ready.go:38] duration metric: took 6.205033798s for node "pause-285945" to be "Ready" ...
	I1018 09:21:30.060839 1413875 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:21:30.060923 1413875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:21:30.161675 1413875 api_server.go:72] duration metric: took 6.627315711s to wait for apiserver process to appear ...
	I1018 09:21:30.161705 1413875 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:21:30.161746 1413875 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:21:30.193808 1413875 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:21:30.193897 1413875 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:21:30.666878 1413875 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:21:30.708065 1413875 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:21:30.708090 1413875 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:21:27.235934 1416707 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1018 09:21:27.328237 1416707 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1018 09:21:31.803911 1416707 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1018 09:21:31.804031 1416707 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
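
For reference, the preload URL above carries its expected digest in the query string (checksum=md5:...), and preload.go verifies the downloaded tarball against it. A minimal standalone sketch of that download-then-verify step (stdlib only; URL and digest are copied from this log, the output filename is illustrative):

    // Sketch: download a file and verify its md5 checksum, as the preload step
    // above does. Error handling is deliberately minimal.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        const (
            url  = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4"
            want = "3fdaeefa2c0cc3e046170ba83ccf0cac" // md5 from the URL above
        )

        resp, err := http.Get(url)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        out, err := os.Create("preload.tar.lz4")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()

        // Hash the stream while writing it to disk.
        h := md5.New()
        if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
            log.Fatal(err)
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            log.Fatalf("checksum mismatch: got %s, want %s", got, want)
        }
        fmt.Println("checksum ok")
    }
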
	I1018 09:21:31.162024 1413875 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:21:31.172030 1413875 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:21:31.172060 1413875 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:21:31.662693 1413875 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:21:31.671069 1413875 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:21:31.672149 1413875 api_server.go:141] control plane version: v1.34.1
	I1018 09:21:31.672173 1413875 api_server.go:131] duration metric: took 1.510460359s to wait for apiserver health ...
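
For reference, the healthz wait above is a plain HTTPS poll: GET /healthz with the profile's client certificate, retrying until the apiserver returns 200 "ok" (the 500 bodies list which poststarthooks are still pending). A minimal standalone sketch of such a loop (an illustration, not minikube's api_server.go; the endpoint and cert paths are the ones from this run):

    // Sketch: poll an apiserver /healthz endpoint until it reports "ok".
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "time"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/21767-1274243/.minikube"
        cert, err := tls.LoadX509KeyPair(
            profile+"/profiles/pause-285945/client.crt",
            profile+"/profiles/pause-285945/client.key")
        if err != nil {
            log.Fatal(err)
        }
        caPEM, err := os.ReadFile(profile + "/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        httpClient := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{
                Certificates: []tls.Certificate{cert},
                RootCAs:      pool,
            }},
        }

        for {
            resp, err := httpClient.Get("https://192.168.76.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return
                }
                // A 500 body enumerates pending poststarthooks, as in the log.
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
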
	I1018 09:21:31.672181 1413875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:21:31.675916 1413875 system_pods.go:59] 7 kube-system pods found
	I1018 09:21:31.675953 1413875 system_pods.go:61] "coredns-66bc5c9577-ch44s" [7bc5ae75-40ea-4059-a024-c849931795f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:21:31.675962 1413875 system_pods.go:61] "etcd-pause-285945" [6bb1df1b-6746-43b0-83ca-0c57aab670f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:21:31.675968 1413875 system_pods.go:61] "kindnet-5mqfk" [36f47490-1959-4b2b-ad86-324d964ab8c0] Running
	I1018 09:21:31.675976 1413875 system_pods.go:61] "kube-apiserver-pause-285945" [48432868-955c-4f60-929c-8cd5681ffa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:21:31.675988 1413875 system_pods.go:61] "kube-controller-manager-pause-285945" [732686a8-1f86-4c8a-8164-5858e7530690] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:21:31.675998 1413875 system_pods.go:61] "kube-proxy-gqf7g" [ac408112-ba80-4c63-bfa7-1eb56aa91129] Running
	I1018 09:21:31.676005 1413875 system_pods.go:61] "kube-scheduler-pause-285945" [a51ddc5a-4318-4211-9360-62b161b6dc3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:21:31.676015 1413875 system_pods.go:74] duration metric: took 3.82734ms to wait for pod list to return data ...
	I1018 09:21:31.676024 1413875 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:21:31.678474 1413875 default_sa.go:45] found service account: "default"
	I1018 09:21:31.678501 1413875 default_sa.go:55] duration metric: took 2.466615ms for default service account to be created ...
	I1018 09:21:31.678510 1413875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:21:31.681215 1413875 system_pods.go:86] 7 kube-system pods found
	I1018 09:21:31.681249 1413875 system_pods.go:89] "coredns-66bc5c9577-ch44s" [7bc5ae75-40ea-4059-a024-c849931795f7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:21:31.681258 1413875 system_pods.go:89] "etcd-pause-285945" [6bb1df1b-6746-43b0-83ca-0c57aab670f1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:21:31.681264 1413875 system_pods.go:89] "kindnet-5mqfk" [36f47490-1959-4b2b-ad86-324d964ab8c0] Running
	I1018 09:21:31.681271 1413875 system_pods.go:89] "kube-apiserver-pause-285945" [48432868-955c-4f60-929c-8cd5681ffa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:21:31.681278 1413875 system_pods.go:89] "kube-controller-manager-pause-285945" [732686a8-1f86-4c8a-8164-5858e7530690] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:21:31.681305 1413875 system_pods.go:89] "kube-proxy-gqf7g" [ac408112-ba80-4c63-bfa7-1eb56aa91129] Running
	I1018 09:21:31.681313 1413875 system_pods.go:89] "kube-scheduler-pause-285945" [a51ddc5a-4318-4211-9360-62b161b6dc3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:21:31.681324 1413875 system_pods.go:126] duration metric: took 2.807906ms to wait for k8s-apps to be running ...
	I1018 09:21:31.681332 1413875 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:21:31.681391 1413875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:21:31.695343 1413875 system_svc.go:56] duration metric: took 13.995667ms WaitForService to wait for kubelet
	I1018 09:21:31.695373 1413875 kubeadm.go:586] duration metric: took 8.161020589s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:21:31.695391 1413875 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:21:31.698406 1413875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:21:31.698438 1413875 node_conditions.go:123] node cpu capacity is 2
	I1018 09:21:31.698450 1413875 node_conditions.go:105] duration metric: took 3.053593ms to run NodePressure ...
	I1018 09:21:31.698462 1413875 start.go:241] waiting for startup goroutines ...
	I1018 09:21:31.698469 1413875 start.go:246] waiting for cluster config update ...
	I1018 09:21:31.698478 1413875 start.go:255] writing updated cluster config ...
	I1018 09:21:31.698777 1413875 ssh_runner.go:195] Run: rm -f paused
	I1018 09:21:31.704761 1413875 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:21:31.705252 1413875 kapi.go:59] client config for pause-285945: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/pause-285945/client.crt", KeyFile:"/home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/pause-285945/client.key", CAFile:"/home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:21:31.709401 1413875 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ch44s" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:21:33.716543 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	I1018 09:21:32.690992 1416707 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1018 09:21:32.691004 1416707 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
	I1018 09:21:33.062294 1416707 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1018 09:21:33.067710 1416707 profile.go:148] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/missing-upgrade-995648/config.json ...
	I1018 09:21:33.067756 1416707 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/missing-upgrade-995648/config.json: {Name:mkfc456014863ee15198df9815eb51dba8e2ce4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1018 09:21:36.215372 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	W1018 09:21:38.215646 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	W1018 09:21:40.216339 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	W1018 09:21:42.219381 1413875 pod_ready.go:104] pod "coredns-66bc5c9577-ch44s" is not "Ready", error: <nil>
	I1018 09:21:43.215738 1413875 pod_ready.go:94] pod "coredns-66bc5c9577-ch44s" is "Ready"
	I1018 09:21:43.215761 1413875 pod_ready.go:86] duration metric: took 11.506324204s for pod "coredns-66bc5c9577-ch44s" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.219124 1413875 pod_ready.go:83] waiting for pod "etcd-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.225083 1413875 pod_ready.go:94] pod "etcd-pause-285945" is "Ready"
	I1018 09:21:43.225113 1413875 pod_ready.go:86] duration metric: took 5.967187ms for pod "etcd-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.229054 1413875 pod_ready.go:83] waiting for pod "kube-apiserver-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.234075 1413875 pod_ready.go:94] pod "kube-apiserver-pause-285945" is "Ready"
	I1018 09:21:43.234153 1413875 pod_ready.go:86] duration metric: took 5.067979ms for pod "kube-apiserver-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.236607 1413875 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.413679 1413875 pod_ready.go:94] pod "kube-controller-manager-pause-285945" is "Ready"
	I1018 09:21:43.413707 1413875 pod_ready.go:86] duration metric: took 177.07744ms for pod "kube-controller-manager-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:43.613321 1413875 pod_ready.go:83] waiting for pod "kube-proxy-gqf7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:44.014135 1413875 pod_ready.go:94] pod "kube-proxy-gqf7g" is "Ready"
	I1018 09:21:44.014165 1413875 pod_ready.go:86] duration metric: took 400.814866ms for pod "kube-proxy-gqf7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:44.213198 1413875 pod_ready.go:83] waiting for pod "kube-scheduler-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:44.613917 1413875 pod_ready.go:94] pod "kube-scheduler-pause-285945" is "Ready"
	I1018 09:21:44.613943 1413875 pod_ready.go:86] duration metric: took 400.721405ms for pod "kube-scheduler-pause-285945" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:21:44.613955 1413875 pod_ready.go:40] duration metric: took 12.909166206s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:21:44.696019 1413875 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:21:44.707995 1413875 out.go:179] * Done! kubectl is now configured to use "pause-285945" cluster and "default" namespace by default
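
For reference, the pod_ready waits above poll each kube-system pod until its PodReady condition is True or the 4m0s budget runs out. A minimal client-go sketch of the same idea (an illustration, not minikube's pod_ready.go; assumes a recent client-go/apimachinery, the default kubeconfig, and the coredns pod name from this run):

    // Sketch: wait for a pod's Ready condition with client-go.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-ch44s", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pod is Ready")
    }
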
	
	
	==> CRI-O <==
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.169694893Z" level=info msg="Starting container: 11700d96acff9466670c5cf8030c00176989ee1d0ac5ea3f9e31d40694f9219a" id=60c1e74f-6654-4adc-82d5-74a77dd4689c name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.184568567Z" level=info msg="Started container" PID=2191 containerID=33dfc732a6724d34a5a3468b14cb0196df6201d71c0ade7debd448b49d51e4a1 description=kube-system/kindnet-5mqfk/kindnet-cni id=a6016aa2-d3d6-4ec4-aff8-750b42d415fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab5309f61d46ceac3bf534b71f824672eb9505e25e4c7b4bdcf3d915c173059b
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.184799132Z" level=info msg="Started container" PID=2186 containerID=4cb85d8cc7f14d8d9d246217cd05a39d1173d04b8258cd991ce09ef6653e3e56 description=kube-system/etcd-pause-285945/etcd id=84c803f2-06e4-4c2b-a9bd-e953300511d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38d536a046c058042ceceb4d8a98b3f05abff4ee290e2d876e06116124fecbd1
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.19372097Z" level=info msg="Created container 398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992: kube-system/kube-proxy-gqf7g/kube-proxy" id=0dcf6988-1c14-4b66-99be-03832fd725e2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.19401729Z" level=info msg="Created container 37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de: kube-system/kube-apiserver-pause-285945/kube-apiserver" id=ed6acad8-c779-4775-ad33-0f19cb8e326e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.195325595Z" level=info msg="Starting container: 398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992" id=1d1d4a37-6f68-4758-a541-2693585895f7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.202706815Z" level=info msg="Starting container: 37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de" id=c9a12835-c2d2-407a-b8a1-49105a4b954b name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.207794551Z" level=info msg="Started container" PID=2172 containerID=11700d96acff9466670c5cf8030c00176989ee1d0ac5ea3f9e31d40694f9219a description=kube-system/kube-controller-manager-pause-285945/kube-controller-manager id=60c1e74f-6654-4adc-82d5-74a77dd4689c name=/runtime.v1.RuntimeService/StartContainer sandboxID=2703c8ae03b3edc1ec788a74a49f01555c2c2b096b91cdd4a5e73c8722b0e08c
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.210073192Z" level=info msg="Started container" PID=2204 containerID=37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de description=kube-system/kube-apiserver-pause-285945/kube-apiserver id=c9a12835-c2d2-407a-b8a1-49105a4b954b name=/runtime.v1.RuntimeService/StartContainer sandboxID=b3e71e34af08c2e76bb659e1aba557a23c63221c69dce7c8d76c52dd82d6ae0e
	Oct 18 09:21:21 pause-285945 crio[2089]: time="2025-10-18T09:21:21.222931766Z" level=info msg="Started container" PID=2206 containerID=398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992 description=kube-system/kube-proxy-gqf7g/kube-proxy id=1d1d4a37-6f68-4758-a541-2693585895f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e102ec99b1bf20cdc1d2a9099402ac5377bfa4b4764b73ba27adffc93b337fc
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.510722592Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.514400399Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.514571815Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.514643887Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.518743236Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.518780208Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.518802656Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.524908752Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.524942286Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.524965555Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.529655788Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.529691431Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.52971785Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.534340574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:21:41 pause-285945 crio[2089]: time="2025-10-18T09:21:41.534503146Z" level=info msg="Updated default CNI network name to kindnet"
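
The CREATE, WRITE, RENAME sequence above is kindnet rewriting its conflist atomically: it writes /etc/cni/net.d/10-kindnet.conflist.temp and then renames it over 10-kindnet.conflist, and crio's CNI monitor re-reads the default network on each event. A hedged way to pull the same crio entries straight from the node, assuming crio runs as a systemd unit on the minikube docker-driver node:

	out/minikube-linux-arm64 -p pause-285945 ssh -- sudo journalctl -u crio --no-pager -n 50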
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	398d36b13bd72       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   30 seconds ago       Running             kube-proxy                1                   3e102ec99b1bf       kube-proxy-gqf7g                       kube-system
	37cc300384b47       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   30 seconds ago       Running             kube-apiserver            1                   b3e71e34af08c       kube-apiserver-pause-285945            kube-system
	4cb85d8cc7f14       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   30 seconds ago       Running             etcd                      1                   38d536a046c05       etcd-pause-285945                      kube-system
	33dfc732a6724       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   30 seconds ago       Running             kindnet-cni               1                   ab5309f61d46c       kindnet-5mqfk                          kube-system
	11700d96acff9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   30 seconds ago       Running             kube-controller-manager   1                   2703c8ae03b3e       kube-controller-manager-pause-285945   kube-system
	cc61a4a23f064       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   30 seconds ago       Running             kube-scheduler            1                   6545c64a6779d       kube-scheduler-pause-285945            kube-system
	f35c8e372dcf6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   30 seconds ago       Running             coredns                   1                   45c0cbebe9f23       coredns-66bc5c9577-ch44s               kube-system
	1a4e64037be19       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   43 seconds ago       Exited              coredns                   0                   45c0cbebe9f23       coredns-66bc5c9577-ch44s               kube-system
	21581abd06b44       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   3e102ec99b1bf       kube-proxy-gqf7g                       kube-system
	0b6f4c5d68f57       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   ab5309f61d46c       kindnet-5mqfk                          kube-system
	34cd6896ef08a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   2703c8ae03b3e       kube-controller-manager-pause-285945   kube-system
	0538a9e91af3c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   38d536a046c05       etcd-pause-285945                      kube-system
	0280c850869e7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   6545c64a6779d       kube-scheduler-pause-285945            kube-system
	6a420152c7a36       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   b3e71e34af08c       kube-apiserver-pause-285945            kube-system
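
Every control-plane component appears twice with the same POD ID: an Exited ATTEMPT 0 from the first start and a Running ATTEMPT 1 created about 30 seconds ago, which matches the runtime restart this pause test exercises. The listing resembles crictl output; a sketch for reproducing it on the node, assuming crictl is configured against the crio socket:

	out/minikube-linux-arm64 -p pause-285945 ssh -- sudo crictl ps -a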
	
	
	==> coredns [1a4e64037be19372aa7d12c0611a808493277713d7879148571a9fd55986faa2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54952 - 36421 "HINFO IN 8726939055143285893.3432354257386852064. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021793024s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
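
This instance shut down cleanly on SIGTERM with the default 5s lameduck window. Logs of an exited attempt remain retrievable while the pod object exists; for example, using the restarted pod named above:

	kubectl -n kube-system logs coredns-66bc5c9577-ch44s --previous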
	
	
	==> coredns [f35c8e372dcf6c00ea78ebab8b256123203f31af973dfc78436329501af16b2d] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58797 - 64172 "HINFO IN 8098128589160393520.4773053509409951149. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010891884s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
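
The "forbidden" list errors look transient: the restarted kube-apiserver was still warming up its RBAC caches when CoreDNS reconnected, so list calls failed until the caches synced. One way to confirm the ServiceAccount itself holds the permission once the apiserver has settled (resource and subject names taken from the errors above):

	kubectl auth can-i list endpointslices.discovery.k8s.io \
	  --as=system:serviceaccount:kube-system:coredns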
	
	
	==> describe nodes <==
	Name:               pause-285945
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-285945
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=pause-285945
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_20_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:20:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-285945
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:21:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:21:11 +0000   Sat, 18 Oct 2025 09:20:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:21:11 +0000   Sat, 18 Oct 2025 09:20:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:21:11 +0000   Sat, 18 Oct 2025 09:20:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:21:11 +0000   Sat, 18 Oct 2025 09:21:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-285945
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                1d3b9ce8-5086-43aa-8774-123f7e957e3d
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ch44s                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     85s
	  kube-system                 etcd-pause-285945                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         91s
	  kube-system                 kindnet-5mqfk                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      86s
	  kube-system                 kube-apiserver-pause-285945             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-285945    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-gqf7g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-pause-285945             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 84s                  kube-proxy       
	  Normal   Starting                 19s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node pause-285945 status is now: NodeHasSufficientMemory
	  Normal   Starting                 101s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 101s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node pause-285945 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s (x8 over 101s)  kubelet          Node pause-285945 status is now: NodeHasSufficientPID
	  Normal   Starting                 91s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 91s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  91s                  kubelet          Node pause-285945 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s                  kubelet          Node pause-285945 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s                  kubelet          Node pause-285945 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           86s                  node-controller  Node pause-285945 event: Registered Node pause-285945 in Controller
	  Normal   NodeReady                44s                  kubelet          Node pause-285945 status is now: NodeReady
	  Warning  ContainerGCFailed        31s                  kubelet          [rpc error: code = Unavailable desc = error reading from server: read unix @->/var/run/crio/crio.sock: read: connection reset by peer, rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"]
	  Normal   RegisteredNode           18s                  node-controller  Node pause-285945 event: Registered Node pause-285945 in Controller
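
Note the ContainerGCFailed warning 31s ago: the kubelet lost /var/run/crio/crio.sock while crio restarted, which lines up with the Exited/Running container pairs above. The whole block is standard node description output and is reproducible with the node name from this report:

	kubectl describe node pause-285945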
	
	
	==> dmesg <==
	[  +3.790220] overlayfs: idmapped layers are currently not supported
	[Oct18 08:57] overlayfs: idmapped layers are currently not supported
	[Oct18 08:58] overlayfs: idmapped layers are currently not supported
	[Oct18 08:59] overlayfs: idmapped layers are currently not supported
	[  +2.831556] overlayfs: idmapped layers are currently not supported
	[ +37.438223] overlayfs: idmapped layers are currently not supported
	[Oct18 09:00] overlayfs: idmapped layers are currently not supported
	[Oct18 09:02] overlayfs: idmapped layers are currently not supported
	[Oct18 09:07] overlayfs: idmapped layers are currently not supported
	[ +35.005632] overlayfs: idmapped layers are currently not supported
	[Oct18 09:08] overlayfs: idmapped layers are currently not supported
	[Oct18 09:10] overlayfs: idmapped layers are currently not supported
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
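
The recurring "overlayfs: idmapped layers are currently not supported" lines are kernel-ring noise emitted when container image layers are mounted on this 5.15 kernel; they are not specific to this failure. A hedged way to view the same ring buffer with human-readable timestamps:

	out/minikube-linux-arm64 -p pause-285945 ssh -- sudo dmesg --ctime | tail -n 30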
	
	
	==> etcd [0538a9e91af3c2594858d5901fc22b2cc12438ad7f8b9e27aac99c9ed1080c70] <==
	{"level":"warn","ts":"2025-10-18T09:20:15.195971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.225645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.262929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.295797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.329894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.338740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:20:15.500856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52034","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:21:13.349075Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T09:21:13.349122Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-285945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-18T09:21:13.349205Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:21:13.640325Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:21:13.641813Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:21:13.641870Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-18T09:21:13.641930Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-18T09:21:13.641950Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T09:21:13.641985Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:21:13.642049Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:21:13.642086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T09:21:13.642159Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:21:13.642179Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:21:13.642189Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:21:13.645203Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-18T09:21:13.645277Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:21:13.645343Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:21:13.645388Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-285945","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [4cb85d8cc7f14d8d9d246217cd05a39d1173d04b8258cd991ce09ef6653e3e56] <==
	{"level":"warn","ts":"2025-10-18T09:21:27.740793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.784917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.871355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.902723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.933501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.954920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:27.983646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.030423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.047989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.090376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.167984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.181599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.203120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.251658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.256621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.282101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.311590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.337849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.353721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.367147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.392580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.426298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.477274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.478285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:21:28.567733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43474","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:21:51 up 11:04,  0 user,  load average: 3.53, 2.19, 1.89
	Linux pause-285945 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
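
This block is typically assembled from uptime, uname -a, and the PRETTY_NAME field of /etc/os-release; the node-side equivalent would be something like:

	out/minikube-linux-arm64 -p pause-285945 ssh -- 'uptime; uname -a; grep PRETTY_NAME /etc/os-release'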
	
	
	==> kindnet [0b6f4c5d68f5776db514c3650ffb5153bc00f2908fa3e687038271e781876444] <==
	I1018 09:20:26.910282       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:20:26.910975       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:20:26.911102       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:20:26.911114       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:20:26.911126       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:20:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:20:27.119538       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:20:27.119607       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:20:27.119617       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:20:27.119739       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:20:57.120285       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:20:57.120298       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:20:57.120410       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:20:57.120547       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 09:20:58.719739       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:20:58.719999       1 metrics.go:72] Registering metrics
	I1018 09:20:58.720104       1 controller.go:711] "Syncing nftables rules"
	I1018 09:21:07.125914       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:21:07.125969       1 main.go:301] handling current node
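
The i/o timeouts against 10.96.0.1:443 cover the window when the apiserver was down; the watches recover and caches sync at 09:20:58. 10.96.0.1 is the ClusterIP of the kubernetes Service, so once the apiserver is back both of these should succeed:

	kubectl get svc kubernetes -o wide
	kubectl get --raw /healthz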
	
	
	==> kindnet [33dfc732a6724d34a5a3468b14cb0196df6201d71c0ade7debd448b49d51e4a1] <==
	I1018 09:21:21.255065       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:21:21.265260       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:21:21.265408       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:21:21.265420       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:21:21.265434       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:21:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:21:21.515476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:21:21.515519       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:21:21.515528       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:21:21.516283       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:21:30.032920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:21:30.037547       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 09:21:30.037611       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:21:30.037658       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 09:21:31.515674       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:21:31.515709       1 metrics.go:72] Registering metrics
	I1018 09:21:31.515761       1 controller.go:711] "Syncing nftables rules"
	I1018 09:21:41.510303       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:21:41.510411       1 main.go:301] handling current node
	I1018 09:21:51.516741       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:21:51.516772       1 main.go:301] handling current node
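
Same warm-up pattern as CoreDNS: list calls are forbidden at 09:21:30 and succeed by 09:21:31 once the apiserver's RBAC caches sync. If the errors persisted, the binding to inspect would be kindnet's ClusterRoleBinding (the name here is assumed from the standard kindnet manifest):

	kubectl get clusterrolebinding kindnet -o wide
	kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:kindnet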
	
	
	==> kube-apiserver [37cc300384b47fa2f70b63b8137d532c953350a3a0e89ad417d892787bec53de] <==
	I1018 09:21:30.041100       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1018 09:21:30.091401       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:21:30.091526       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:21:30.130410       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:21:30.130525       1 policy_source.go:240] refreshing policies
	I1018 09:21:30.131347       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:21:30.137621       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:21:30.137803       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:21:30.149380       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:21:30.153164       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:21:30.182075       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:21:30.187740       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:21:30.201955       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:21:30.202069       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:21:30.203666       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1018 09:21:30.232334       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:21:30.241719       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:21:30.246784       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:21:30.263259       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:21:30.728116       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:21:31.949924       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:21:33.293591       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:21:33.334121       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:21:33.508841       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:21:33.607776       1 controller.go:667] quota admission added evaluator for: deployments.apps
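
The restarted apiserver finishes its post-start sequence here: caches sync, the API Priority and Fairness workers start, and quota admission evaluators register as the controllers reconnect. Its individual readiness checks can be inspected directly:

	kubectl get --raw '/readyz?verbose'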
	
	
	==> kube-apiserver [6a420152c7a36a00d7bff2513f0738078c0e95b7ddbdd097bb451e18a06c3cb4] <==
	W1018 09:21:13.383107       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383166       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383200       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383253       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383311       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383365       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383413       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383447       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383505       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383585       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383631       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383668       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383719       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383770       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.382563       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383940       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383996       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.384058       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.382189       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.382409       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383225       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383018       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.384186       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383640       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1018 09:21:13.383421       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
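
Every gRPC channel reports connection refused to 127.0.0.1:2379 because etcd had already closed its client listener during the shutdown sequence shown earlier; this is the old apiserver draining, not a new failure. A quick check of which process currently owns the port, assuming ss is available on the node:

	out/minikube-linux-arm64 -p pause-285945 ssh -- sudo ss -ltnp 'sport = :2379'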
	
	
	==> kube-controller-manager [11700d96acff9466670c5cf8030c00176989ee1d0ac5ea3f9e31d40694f9219a] <==
	I1018 09:21:33.222306       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:21:33.226999       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:21:33.230059       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:21:33.230187       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:21:33.234055       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:21:33.235410       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:21:33.235496       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:21:33.236929       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:21:33.240379       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:21:33.241834       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:21:33.245732       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:21:33.245880       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:21:33.246640       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:21:33.246803       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:21:33.249110       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:21:33.246878       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:21:33.246860       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:21:33.256783       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-285945"
	I1018 09:21:33.258425       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:21:33.260064       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:21:33.262722       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:21:33.295608       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:21:33.295688       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:21:33.295698       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:21:33.374552       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
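
The restarted controller-manager re-syncs every informer cache and the garbage collector before resuming work. Which instance currently holds leadership is recorded in its Lease object:

	kubectl -n kube-system get lease kube-controller-manager \
	  -o jsonpath='{.spec.holderIdentity}{"\n"}'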
	
	
	==> kube-controller-manager [34cd6896ef08a5ecb37b5e68f208db427fb410a5f74cc5675a230e467bd084c2] <==
	I1018 09:20:25.330086       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:20:25.330106       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:20:25.333678       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:20:25.334064       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:20:25.334620       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:20:25.335078       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 09:20:25.335101       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:20:25.373399       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:20:25.373559       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:20:25.379291       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:20:25.379462       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:20:25.379493       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:20:25.379505       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:20:25.379554       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:20:25.379614       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-285945"
	I1018 09:20:25.379648       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:20:25.379672       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:20:25.384588       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:20:25.384642       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:20:25.481618       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:20:25.784724       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:20:25.784751       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:20:25.784758       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:20:25.842974       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:21:10.387766       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [21581abd06b4468f6862b749514951b88fa19a9799c250033bae2d5038769a0e] <==
	I1018 09:20:26.938288       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:20:27.076634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:20:27.213074       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:20:27.213113       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:20:27.213185       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:20:27.237234       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:20:27.237289       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:20:27.242515       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:20:27.242816       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:20:27.242838       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:20:27.243952       1 config.go:200] "Starting service config controller"
	I1018 09:20:27.244012       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:20:27.249661       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:20:27.250842       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:20:27.250932       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:20:27.250962       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:20:27.251869       1 config.go:309] "Starting node config controller"
	I1018 09:20:27.253870       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:20:27.253922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:20:27.344442       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:20:27.351709       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:20:27.351714       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [398d36b13bd728faa6adfbd6e855eb74c2571e3dfabd6e96b83dce6fa189e992] <==
	I1018 09:21:24.135235       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:21:25.549131       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1018 09:21:30.425936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-285945\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 09:21:31.549806       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:21:31.549967       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:21:31.550107       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:21:31.614327       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:21:31.614384       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:21:31.618474       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:21:31.618746       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:21:31.618771       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:21:31.620073       1 config.go:200] "Starting service config controller"
	I1018 09:21:31.620139       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:21:31.620233       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:21:31.620276       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:21:31.620314       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:21:31.620340       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:21:31.621332       1 config.go:309] "Starting node config controller"
	I1018 09:21:31.624887       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:21:31.624967       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:21:31.720425       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:21:31.729057       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:21:31.729096       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0280c850869e7c23b1ebc23ed077a035b68a823b0fcc56ddd5a24a101be5ea92] <==
	I1018 09:20:15.916556       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:20:18.707870       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:20:18.707998       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:20:18.708034       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:20:18.708064       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:20:18.764441       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:20:18.764839       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:20:18.767209       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:20:18.770903       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:20:18.775950       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:20:18.770934       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 09:20:18.801202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 09:20:20.477268       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:21:13.359550       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1018 09:21:13.359675       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 09:21:13.359686       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 09:21:13.359704       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:21:13.359902       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1018 09:21:13.360019       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cc61a4a23f0644850ab9a93b718afbf434bfd7dabb72b71c31d514a06ab41dd6] <==
	I1018 09:21:25.895598       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:21:30.333085       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:21:30.336781       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:21:30.392935       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:21:30.393120       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:21:30.393186       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:21:30.394287       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:21:30.403605       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:21:30.411916       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:21:30.412055       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:21:30.412102       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:21:30.496110       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:21:30.512855       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:21:30.513001       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:21:20 pause-285945 kubelet[1295]: E1018 09:21:20.886922    1295 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-285945\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="24737c116b177f9657f1e0a9d2efd2f3" pod="kube-system/kube-scheduler-pause-285945"
	Oct 18 09:21:20 pause-285945 kubelet[1295]: E1018 09:21:20.887241    1295 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-285945\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="1e9ce09b8a1f2a8436babb2348741acf" pod="kube-system/kube-controller-manager-pause-285945"
	Oct 18 09:21:20 pause-285945 kubelet[1295]: E1018 09:21:20.887553    1295 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-285945\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="4baf9c8602f49720683ddd1f83259199" pod="kube-system/etcd-pause-285945"
	Oct 18 09:21:20 pause-285945 kubelet[1295]: E1018 09:21:20.887928    1295 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-285945\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="5db594b15ce04b469bce45ecdb4e9905" pod="kube-system/kube-apiserver-pause-285945"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.850523    1295 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-285945\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.851275    1295 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-285945\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.851972    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="4baf9c8602f49720683ddd1f83259199" pod="kube-system/etcd-pause-285945"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.881857    1295 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-285945\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.882555    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="5db594b15ce04b469bce45ecdb4e9905" pod="kube-system/kube-apiserver-pause-285945"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.903353    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5mqfk\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="36f47490-1959-4b2b-ad86-324d964ab8c0" pod="kube-system/kindnet-5mqfk"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.915421    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-gqf7g\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="ac408112-ba80-4c63-bfa7-1eb56aa91129" pod="kube-system/kube-proxy-gqf7g"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.952896    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-ch44s\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="7bc5ae75-40ea-4059-a024-c849931795f7" pod="kube-system/coredns-66bc5c9577-ch44s"
	Oct 18 09:21:29 pause-285945 kubelet[1295]: E1018 09:21:29.992508    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="24737c116b177f9657f1e0a9d2efd2f3" pod="kube-system/kube-scheduler-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.046501    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="1e9ce09b8a1f2a8436babb2348741acf" pod="kube-system/kube-controller-manager-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.049770    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="24737c116b177f9657f1e0a9d2efd2f3" pod="kube-system/kube-scheduler-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.053739    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="1e9ce09b8a1f2a8436babb2348741acf" pod="kube-system/kube-controller-manager-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.068190    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="4baf9c8602f49720683ddd1f83259199" pod="kube-system/etcd-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.070833    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-285945\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="5db594b15ce04b469bce45ecdb4e9905" pod="kube-system/kube-apiserver-pause-285945"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.073655    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5mqfk\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="36f47490-1959-4b2b-ad86-324d964ab8c0" pod="kube-system/kindnet-5mqfk"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.080242    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-gqf7g\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="ac408112-ba80-4c63-bfa7-1eb56aa91129" pod="kube-system/kube-proxy-gqf7g"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: E1018 09:21:30.082824    1295 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-ch44s\" is forbidden: User \"system:node:pause-285945\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-285945' and this object" podUID="7bc5ae75-40ea-4059-a024-c849931795f7" pod="kube-system/coredns-66bc5c9577-ch44s"
	Oct 18 09:21:30 pause-285945 kubelet[1295]: W1018 09:21:30.779745    1295 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 18 09:21:45 pause-285945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:21:45 pause-285945 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:21:45 pause-285945 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
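Both kube-proxy blocks above log the same advisory, "Kube-proxy configuration may be incomplete or incorrect ... nodePortAddresses is unset". It is a warning rather than the failure here, but a minimal sketch of the change it suggests follows, assuming the stock kube-proxy ConfigMap in kube-system and the v1alpha1 field name nodePortAddresses (verify both against the running version):

	# inspect the current kube-proxy configuration
	kubectl --context pause-285945 -n kube-system get configmap kube-proxy -o yaml
	# after adding nodePortAddresses: ["primary"] to the config.conf payload,
	# restart the daemonset so kube-proxy picks up the change
	kubectl --context pause-285945 -n kube-system rollout restart daemonset/kube-proxy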
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-285945 -n pause-285945
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-285945 -n pause-285945: exit status 2 (460.136024ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-285945 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.849796ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:29:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
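The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state probe, which runs runc on the node and fails because /run/runc does not exist. A minimal sketch of reproducing the probe by hand, assuming the profile is still up (the crictl alternative is an illustration, not what minikube itself runs):

	# the exact command the addon check executed, per the stderr above
	out/minikube-linux-arm64 -p old-k8s-version-136598 ssh -- sudo runc list -f json
	# listing containers through the CRI instead avoids the runc state root
	out/minikube-linux-arm64 -p old-k8s-version-136598 ssh -- sudo crictl ps -a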
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-136598 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-136598 describe deploy/metrics-server -n kube-system: exit status 1 (84.649206ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-136598 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
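The assertion at start_stop_delete_test.go:219 checks that the deployed image string contains the remapped registry. A minimal sketch of the same check done by hand, assuming the metrics-server deployment exists (here it was never created, which is why the deployment info above is empty):

	kubectl --context old-k8s-version-136598 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4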
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-136598
helpers_test.go:243: (dbg) docker inspect old-k8s-version-136598:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf",
	        "Created": "2025-10-18T09:28:36.683322169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1453673,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:28:36.74751396Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/hosts",
	        "LogPath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf-json.log",
	        "Name": "/old-k8s-version-136598",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136598:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136598",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf",
	                "LowerDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136598",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136598/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136598",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136598",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136598",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d8db57763643aae9b44da39a1cfe586517011f1e9bbc41472abc2ad12fc01439",
	            "SandboxKey": "/var/run/docker/netns/d8db57763643",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34871"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34872"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34875"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34873"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34874"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136598": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:ba:bf:b0:4b:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ac75cd444c8d84d3c10418ebf74369e7543fa159203a9e520092b626fcf4011",
	                    "EndpointID": "e0de3ad6584c6f22668474e0ef9a7c4bbe220f5bcc77a9384481e3fbb6822077",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-136598",
	                        "396852f7b3ff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
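To pull a single field out of an inspect blob like the one above rather than scanning the full JSON, docker inspect accepts a Go template; a small sketch, assuming the container name from this run:

	# host port mapped to the API server port (8443/tcp); prints 34874 for this container
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-136598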
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-136598 -n old-k8s-version-136598
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-136598 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-136598 logs -n 25: (1.198629897s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-275703 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo containerd config dump                                                                                                                                                                                                  │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo crio config                                                                                                                                                                                                             │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ delete  │ -p cilium-275703                                                                                                                                                                                                                              │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ start   │ -p force-systemd-env-406177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-406177  │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ delete  │ -p force-systemd-env-406177                                                                                                                                                                                                                   │ force-systemd-env-406177  │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:27 UTC │
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │                     │
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ delete  │ -p kubernetes-upgrade-757858                                                                                                                                                                                                                  │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ start   │ -p cert-options-783705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ cert-options-783705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ -p cert-options-783705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ delete  │ -p cert-options-783705                                                                                                                                                                                                                        │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:28:30
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:28:30.662022 1453282 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:28:30.662158 1453282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:28:30.662170 1453282 out.go:374] Setting ErrFile to fd 2...
	I1018 09:28:30.662176 1453282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:28:30.662452 1453282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:28:30.662877 1453282 out.go:368] Setting JSON to false
	I1018 09:28:30.663797 1453282 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40258,"bootTime":1760739453,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:28:30.663941 1453282 start.go:141] virtualization:  
	I1018 09:28:30.667650 1453282 out.go:179] * [old-k8s-version-136598] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:28:30.672246 1453282 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:28:30.672287 1453282 notify.go:220] Checking for updates...
	I1018 09:28:30.675705 1453282 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:28:30.679099 1453282 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:28:30.682318 1453282 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:28:30.685522 1453282 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:28:30.688716 1453282 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:28:30.692300 1453282 config.go:182] Loaded profile config "cert-expiration-854768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:28:30.692428 1453282 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:28:30.716599 1453282 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:28:30.716713 1453282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:28:30.772661 1453282 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:28:30.763062027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:28:30.772768 1453282 docker.go:318] overlay module found
	I1018 09:28:30.776130 1453282 out.go:179] * Using the docker driver based on user configuration
	I1018 09:28:30.779235 1453282 start.go:305] selected driver: docker
	I1018 09:28:30.779265 1453282 start.go:925] validating driver "docker" against <nil>
	I1018 09:28:30.779301 1453282 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:28:30.780222 1453282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:28:30.837817 1453282 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:28:30.828937333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:28:30.837969 1453282 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:28:30.838197 1453282 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:28:30.841276 1453282 out.go:179] * Using Docker driver with root privileges
	I1018 09:28:30.844194 1453282 cni.go:84] Creating CNI manager for ""
	I1018 09:28:30.844271 1453282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:28:30.844287 1453282 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:28:30.844369 1453282 start.go:349] cluster config:
	{Name:old-k8s-version-136598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-136598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:28:30.849266 1453282 out.go:179] * Starting "old-k8s-version-136598" primary control-plane node in "old-k8s-version-136598" cluster
	I1018 09:28:30.852227 1453282 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:28:30.855254 1453282 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:28:30.858190 1453282 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:28:30.858241 1453282 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 09:28:30.858254 1453282 cache.go:58] Caching tarball of preloaded images
	I1018 09:28:30.858286 1453282 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:28:30.858360 1453282 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:28:30.858370 1453282 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 09:28:30.858472 1453282 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/config.json ...
	I1018 09:28:30.858488 1453282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/config.json: {Name:mkf2bea5128054694f5dff8ffcfd0c513a4260b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:28:30.876846 1453282 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:28:30.876868 1453282 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:28:30.876880 1453282 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:28:30.876901 1453282 start.go:360] acquireMachinesLock for old-k8s-version-136598: {Name:mkc3336396653163a4ec874eaa8156baab6b30f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:28:30.876995 1453282 start.go:364] duration metric: took 79.489µs to acquireMachinesLock for "old-k8s-version-136598"
	I1018 09:28:30.877025 1453282 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-136598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-136598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:28:30.877099 1453282 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:28:30.880509 1453282 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:28:30.880719 1453282 start.go:159] libmachine.API.Create for "old-k8s-version-136598" (driver="docker")
	I1018 09:28:30.880758 1453282 client.go:168] LocalClient.Create starting
	I1018 09:28:30.880820 1453282 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem
	I1018 09:28:30.880860 1453282 main.go:141] libmachine: Decoding PEM data...
	I1018 09:28:30.880884 1453282 main.go:141] libmachine: Parsing certificate...
	I1018 09:28:30.880941 1453282 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem
	I1018 09:28:30.880967 1453282 main.go:141] libmachine: Decoding PEM data...
	I1018 09:28:30.880980 1453282 main.go:141] libmachine: Parsing certificate...
	I1018 09:28:30.881323 1453282 cli_runner.go:164] Run: docker network inspect old-k8s-version-136598 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:28:30.898099 1453282 cli_runner.go:211] docker network inspect old-k8s-version-136598 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:28:30.898186 1453282 network_create.go:284] running [docker network inspect old-k8s-version-136598] to gather additional debugging logs...
	I1018 09:28:30.898200 1453282 cli_runner.go:164] Run: docker network inspect old-k8s-version-136598
	W1018 09:28:30.915403 1453282 cli_runner.go:211] docker network inspect old-k8s-version-136598 returned with exit code 1
	I1018 09:28:30.915444 1453282 network_create.go:287] error running [docker network inspect old-k8s-version-136598]: docker network inspect old-k8s-version-136598: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-136598 not found
	I1018 09:28:30.915456 1453282 network_create.go:289] output of [docker network inspect old-k8s-version-136598]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-136598 not found
	
	** /stderr **
	I1018 09:28:30.915569 1453282 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:28:30.931505 1453282 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-521f8f572997 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:7e:e5:c0:67:29} reservation:<nil>}
	I1018 09:28:30.931932 1453282 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b81e76c4e4f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:bf:e8:f1:22:c8} reservation:<nil>}
	I1018 09:28:30.932357 1453282 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-41e3e621447e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:fc:17:ff:cd:8c} reservation:<nil>}
	I1018 09:28:30.932786 1453282 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5dd0}
	I1018 09:28:30.932804 1453282 network_create.go:124] attempt to create docker network old-k8s-version-136598 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 09:28:30.932879 1453282 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-136598 old-k8s-version-136598
	I1018 09:28:30.989235 1453282 network_create.go:108] docker network old-k8s-version-136598 192.168.76.0/24 created
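
The three "skipping subnet ... that is taken" probes above show how minikube picks the cluster network: it walks candidate private /24s (192.168.49.0 → 58.0 → 67.0 → 76.0, i.e. steps of 9) until one has no existing bridge interface, then creates it with a fixed gateway and MTU. A rough manual equivalent, assuming only the docker CLI (the logged create additionally passes -o --ip-masq -o --icc and minikube's labels):

	# subnets already claimed by existing docker networks
	docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' $(docker network ls -q)
	# create the cluster network on the first free candidate, as in the log
	docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	  -o com.docker.network.driver.mtu=1500 old-k8s-version-136598
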
	I1018 09:28:30.989269 1453282 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-136598" container
	I1018 09:28:30.989340 1453282 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:28:31.005767 1453282 cli_runner.go:164] Run: docker volume create old-k8s-version-136598 --label name.minikube.sigs.k8s.io=old-k8s-version-136598 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:28:31.024865 1453282 oci.go:103] Successfully created a docker volume old-k8s-version-136598
	I1018 09:28:31.024959 1453282 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-136598-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-136598 --entrypoint /usr/bin/test -v old-k8s-version-136598:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:28:31.570059 1453282 oci.go:107] Successfully prepared a docker volume old-k8s-version-136598
	I1018 09:28:31.570101 1453282 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:28:31.570119 1453282 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:28:31.570193 1453282 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-136598:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 09:28:36.617698 1453282 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-136598:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.047460344s)
	I1018 09:28:36.617727 1453282 kic.go:203] duration metric: took 5.047604012s to extract preloaded images to volume ...
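
The extraction step above runs a throwaway container with /usr/bin/tar as its entrypoint: the lz4-compressed preload tarball is bind-mounted read-only at /preloaded.tar, the named volume is mounted at /extractDir, and tar -I lz4 unpacks the image store into the volume (about 5s here). One could spot-check the result with something like the sketch below; the storage path is an assumption based on CRI-O's default image store and is not shown in this log:

	KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	# CRI-O keeps its images under /var/lib/containers/storage by default
	docker run --rm -v old-k8s-version-136598:/var "$KICBASE" ls /var/lib/containers/storage
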
	W1018 09:28:36.617868 1453282 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:28:36.618000 1453282 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:28:36.669020 1453282 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-136598 --name old-k8s-version-136598 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-136598 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-136598 --network old-k8s-version-136598 --ip 192.168.76.2 --volume old-k8s-version-136598:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:28:36.954363 1453282 cli_runner.go:164] Run: docker container inspect old-k8s-version-136598 --format={{.State.Running}}
	I1018 09:28:36.980259 1453282 cli_runner.go:164] Run: docker container inspect old-k8s-version-136598 --format={{.State.Status}}
	I1018 09:28:37.007995 1453282 cli_runner.go:164] Run: docker exec old-k8s-version-136598 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:28:37.076673 1453282 oci.go:144] the created container "old-k8s-version-136598" has a running status.
	I1018 09:28:37.076712 1453282 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa...
	I1018 09:28:37.627286 1453282 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:28:37.651061 1453282 cli_runner.go:164] Run: docker container inspect old-k8s-version-136598 --format={{.State.Status}}
	I1018 09:28:37.674895 1453282 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:28:37.674914 1453282 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-136598 chown docker:docker /home/docker/.ssh/authorized_keys]
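
The two steps above install SSH access into the fresh container: a per-machine keypair is generated on the host, then the public half is copied in as /home/docker/.ssh/authorized_keys and chowned via a privileged exec. A minimal manual sketch of the same flow (container name from the log; the key path is illustrative):

	ssh-keygen -t rsa -N '' -f ./id_rsa
	docker exec --privileged -i old-k8s-version-136598 sh -c \
	  'mkdir -p /home/docker/.ssh && cat > /home/docker/.ssh/authorized_keys && chown -R docker:docker /home/docker/.ssh' < ./id_rsa.pub
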
	I1018 09:28:37.725286 1453282 cli_runner.go:164] Run: docker container inspect old-k8s-version-136598 --format={{.State.Status}}
	I1018 09:28:37.746525 1453282 machine.go:93] provisionDockerMachine start ...
	I1018 09:28:37.746608 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:28:37.771123 1453282 main.go:141] libmachine: Using SSH client type: native
	I1018 09:28:37.771448 1453282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34871 <nil> <nil>}
	I1018 09:28:37.771461 1453282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:28:37.944037 1453282 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-136598
	
	I1018 09:28:37.944058 1453282 ubuntu.go:182] provisioning hostname "old-k8s-version-136598"
	I1018 09:28:37.944136 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:28:37.966686 1453282 main.go:141] libmachine: Using SSH client type: native
	I1018 09:28:37.967062 1453282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34871 <nil> <nil>}
	I1018 09:28:37.967076 1453282 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-136598 && echo "old-k8s-version-136598" | sudo tee /etc/hostname
	I1018 09:28:38.138142 1453282 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-136598
	
	I1018 09:28:38.138231 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:28:38.156566 1453282 main.go:141] libmachine: Using SSH client type: native
	I1018 09:28:38.156894 1453282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34871 <nil> <nil>}
	I1018 09:28:38.156912 1453282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-136598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-136598/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-136598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:28:38.311926 1453282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:28:38.311957 1453282 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:28:38.311975 1453282 ubuntu.go:190] setting up certificates
	I1018 09:28:38.311984 1453282 provision.go:84] configureAuth start
	I1018 09:28:38.312052 1453282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-136598
	I1018 09:28:38.328550 1453282 provision.go:143] copyHostCerts
	I1018 09:28:38.328629 1453282 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:28:38.328689 1453282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:28:38.328789 1453282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:28:38.328907 1453282 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:28:38.328921 1453282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:28:38.328952 1453282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:28:38.329017 1453282 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:28:38.329025 1453282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:28:38.329055 1453282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:28:38.329108 1453282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-136598 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-136598]
	I1018 09:28:38.804301 1453282 provision.go:177] copyRemoteCerts
	I1018 09:28:38.804366 1453282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:28:38.804405 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:28:38.821977 1453282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34871 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa Username:docker}
	I1018 09:28:38.927339 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 09:28:38.945079 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:28:38.962265 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:28:38.979799 1453282 provision.go:87] duration metric: took 667.8003ms to configureAuth
	I1018 09:28:38.979973 1453282 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:28:38.980193 1453282 config.go:182] Loaded profile config "old-k8s-version-136598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:28:38.980316 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:28:38.996760 1453282 main.go:141] libmachine: Using SSH client type: native
	I1018 09:28:38.997087 1453282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34871 <nil> <nil>}
	I1018 09:28:38.997107 1453282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:28:39.260254 1453282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:28:39.260279 1453282 machine.go:96] duration metric: took 1.513737154s to provisionDockerMachine
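
The sysconfig write just above hands CRI-O `--insecure-registry 10.96.0.0/12` (the service CIDR, so in-cluster registries can be used over plain HTTP) and restarts the daemon. Presumably the kicbase crio.service consumes the variable through an EnvironmentFile= line along these lines; this wiring is an assumption, not shown in the log:

	# hypothetical excerpt of how the unit could pick the option up
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
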
	I1018 09:28:39.260289 1453282 client.go:171] duration metric: took 8.37952188s to LocalClient.Create
	I1018 09:28:39.260308 1453282 start.go:167] duration metric: took 8.37958998s to libmachine.API.Create "old-k8s-version-136598"
	I1018 09:28:39.260316 1453282 start.go:293] postStartSetup for "old-k8s-version-136598" (driver="docker")
	I1018 09:28:39.260326 1453282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:28:39.260405 1453282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:28:39.260445 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:28:39.277384 1453282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34871 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa Username:docker}
	I1018 09:28:39.379677 1453282 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:28:39.383168 1453282 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:28:39.383204 1453282 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:28:39.383216 1453282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:28:39.383270 1453282 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:28:39.383347 1453282 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:28:39.383447 1453282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:28:39.390775 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:28:39.407362 1453282 start.go:296] duration metric: took 147.032196ms for postStartSetup
	I1018 09:28:39.407716 1453282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-136598
	I1018 09:28:39.423737 1453282 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/config.json ...
	I1018 09:28:39.424040 1453282 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:28:39.424095 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:28:39.440046 1453282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34871 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa Username:docker}
	I1018 09:28:39.540707 1453282 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:28:39.545213 1453282 start.go:128] duration metric: took 8.668100089s to createHost
	I1018 09:28:39.545238 1453282 start.go:83] releasing machines lock for "old-k8s-version-136598", held for 8.668229923s
	I1018 09:28:39.545306 1453282 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-136598
	I1018 09:28:39.563666 1453282 ssh_runner.go:195] Run: cat /version.json
	I1018 09:28:39.563741 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:28:39.564016 1453282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:28:39.564089 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:28:39.585811 1453282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34871 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa Username:docker}
	I1018 09:28:39.603730 1453282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34871 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa Username:docker}
	I1018 09:28:39.687730 1453282 ssh_runner.go:195] Run: systemctl --version
	I1018 09:28:39.778989 1453282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:28:39.817404 1453282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:28:39.821777 1453282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:28:39.821845 1453282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:28:39.848814 1453282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 09:28:39.848844 1453282 start.go:495] detecting cgroup driver to use...
	I1018 09:28:39.848880 1453282 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:28:39.848933 1453282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:28:39.866905 1453282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:28:39.880219 1453282 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:28:39.880280 1453282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:28:39.896613 1453282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:28:39.915005 1453282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:28:40.045963 1453282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:28:40.200868 1453282 docker.go:234] disabling docker service ...
	I1018 09:28:40.200985 1453282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:28:40.225048 1453282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:28:40.239285 1453282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:28:40.366836 1453282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:28:40.482541 1453282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:28:40.498366 1453282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:28:40.512312 1453282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1018 09:28:40.512403 1453282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:28:40.521551 1453282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:28:40.521658 1453282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:28:40.530915 1453282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:28:40.540294 1453282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:28:40.549735 1453282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:28:40.557836 1453282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:28:40.567095 1453282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:28:40.580632 1453282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:28:40.589360 1453282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:28:40.597964 1453282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:28:40.605127 1453282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:28:40.719450 1453282 ssh_runner.go:195] Run: sudo systemctl restart crio
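
The run of sed/grep edits between 09:28:40.512 and 09:28:40.580 is an idempotent rewrite of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports. Reconstructed from those commands (the file itself is never dumped in the log), the touched keys should end up roughly as:

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
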
	I1018 09:28:40.855961 1453282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:28:40.856030 1453282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:28:40.859681 1453282 start.go:563] Will wait 60s for crictl version
	I1018 09:28:40.859744 1453282 ssh_runner.go:195] Run: which crictl
	I1018 09:28:40.863710 1453282 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:28:40.892040 1453282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:28:40.892124 1453282 ssh_runner.go:195] Run: crio --version
	I1018 09:28:40.920569 1453282 ssh_runner.go:195] Run: crio --version
	I1018 09:28:40.951380 1453282 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1018 09:28:40.954179 1453282 cli_runner.go:164] Run: docker network inspect old-k8s-version-136598 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:28:40.969711 1453282 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:28:40.973621 1453282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
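
The grep/echo/cp pattern above is the reliable way to edit /etc/hosts inside a container: Docker bind-mounts the file, so tools that replace the inode (sed -i and friends) typically fail with "Device or resource busy", while building the new content in /tmp and cp-ing it back writes through the existing inode. The generic shape, with IP and NAME as placeholders:

	{ grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
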
	I1018 09:28:40.983120 1453282 kubeadm.go:883] updating cluster {Name:old-k8s-version-136598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-136598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:28:40.983234 1453282 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:28:40.983297 1453282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:28:41.019058 1453282 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:28:41.019084 1453282 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:28:41.019144 1453282 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:28:41.044200 1453282 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:28:41.044223 1453282 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:28:41.044231 1453282 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1018 09:28:41.044319 1453282 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-136598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-136598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
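
The unit fragment above lands as a systemd drop-in (the 372-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below); the bare `ExecStart=` line is the standard systemd idiom for clearing the base unit's command before redefining it. Installed by hand it would look roughly like this (the exact split between the base unit and the drop-in is an assumption):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-136598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	EOF
	sudo systemctl daemon-reload
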
	I1018 09:28:41.044412 1453282 ssh_runner.go:195] Run: crio config
	I1018 09:28:41.124628 1453282 cni.go:84] Creating CNI manager for ""
	I1018 09:28:41.124649 1453282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:28:41.124667 1453282 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:28:41.124710 1453282 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-136598 NodeName:old-k8s-version-136598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:28:41.124884 1453282 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-136598"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
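
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets scp'd to /var/tmp/minikube/kubeadm.yaml.new (2160 bytes) a few lines below. Before the real init, the rendered file can be sanity-checked without touching the node:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
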
	
	I1018 09:28:41.124962 1453282 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 09:28:41.132594 1453282 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:28:41.132660 1453282 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:28:41.140194 1453282 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 09:28:41.152828 1453282 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:28:41.167170 1453282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1018 09:28:41.183906 1453282 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:28:41.187670 1453282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:28:41.199004 1453282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:28:41.315330 1453282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:28:41.334231 1453282 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598 for IP: 192.168.76.2
	I1018 09:28:41.334299 1453282 certs.go:195] generating shared ca certs ...
	I1018 09:28:41.334329 1453282 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:28:41.334493 1453282 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:28:41.334573 1453282 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:28:41.334609 1453282 certs.go:257] generating profile certs ...
	I1018 09:28:41.334686 1453282 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.key
	I1018 09:28:41.334722 1453282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt with IP's: []
	I1018 09:28:41.595289 1453282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt ...
	I1018 09:28:41.595321 1453282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: {Name:mk95162a47428742f1c5e936a36dc3b0b4ca9861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:28:41.595554 1453282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.key ...
	I1018 09:28:41.595572 1453282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.key: {Name:mk34f9081cba94f16979cdd16db0b9a2d5b990de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:28:41.595674 1453282 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.key.aba38fff
	I1018 09:28:41.595695 1453282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.crt.aba38fff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 09:28:41.939521 1453282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.crt.aba38fff ...
	I1018 09:28:41.939559 1453282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.crt.aba38fff: {Name:mka4f9ecc10962d47ca01fbce046e6bc1df08eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:28:41.939809 1453282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.key.aba38fff ...
	I1018 09:28:41.939832 1453282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.key.aba38fff: {Name:mkedb7f7229c66e4f7c402622197bdeb18dd29d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:28:41.939947 1453282 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.crt.aba38fff -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.crt
	I1018 09:28:41.940037 1453282 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.key.aba38fff -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.key
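
The apiserver serving cert written above is signed by the minikube CA with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]: 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR (the in-cluster `kubernetes` ClusterIP) and 192.168.76.2 is the node IP chosen earlier. A self-signed openssl approximation of the same SAN set, purely illustrative (requires OpenSSL 1.1.1+ for -addext):

	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -subj "/CN=minikube" \
	  -addext "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2" \
	  -keyout apiserver.key -out apiserver.crt
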
	I1018 09:28:41.940101 1453282 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/proxy-client.key
	I1018 09:28:41.940121 1453282 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/proxy-client.crt with IP's: []
	I1018 09:28:42.311237 1453282 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/proxy-client.crt ...
	I1018 09:28:42.311290 1453282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/proxy-client.crt: {Name:mkb68024b29538f112b805bd7455000ed50e85c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:28:42.311526 1453282 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/proxy-client.key ...
	I1018 09:28:42.311543 1453282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/proxy-client.key: {Name:mk777e346f00da00dec644d36a913bd600e1e5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:28:42.311753 1453282 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:28:42.311800 1453282 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:28:42.311816 1453282 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:28:42.311863 1453282 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:28:42.311905 1453282 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:28:42.311937 1453282 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:28:42.311988 1453282 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:28:42.312682 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:28:42.334424 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:28:42.356302 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:28:42.385424 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:28:42.404364 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 09:28:42.424072 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:28:42.441825 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:28:42.459215 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:28:42.476908 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:28:42.494681 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:28:42.511743 1453282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:28:42.529098 1453282 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:28:42.542261 1453282 ssh_runner.go:195] Run: openssl version
	I1018 09:28:42.548563 1453282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:28:42.556645 1453282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:28:42.560288 1453282 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:28:42.560376 1453282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:28:42.601340 1453282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:28:42.609822 1453282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:28:42.618804 1453282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:28:42.622615 1453282 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:28:42.622705 1453282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:28:42.665499 1453282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:28:42.674493 1453282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:28:42.682758 1453282 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:28:42.686286 1453282 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:28:42.686347 1453282 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:28:42.726857 1453282 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
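
The three `openssl x509 -hash` / `ln -fs` pairs above follow the c_rehash convention: OpenSSL locates a CA in /etc/ssl/certs by the hash of its subject, so every installed PEM needs a <subject-hash>.0 symlink (b5213941, 51391683 and 3ec20f2e are the hashes computed for the three certs here). The generic pattern:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
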
	I1018 09:28:42.734976 1453282 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:28:42.738249 1453282 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:28:42.738311 1453282 kubeadm.go:400] StartCluster: {Name:old-k8s-version-136598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-136598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:28:42.738383 1453282 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:28:42.738443 1453282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:28:42.767555 1453282 cri.go:89] found id: ""
	I1018 09:28:42.767620 1453282 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:28:42.777908 1453282 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:28:42.785135 1453282 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:28:42.785224 1453282 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:28:42.793594 1453282 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:28:42.793612 1453282 kubeadm.go:157] found existing configuration files:
	
	I1018 09:28:42.793664 1453282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:28:42.801268 1453282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:28:42.801359 1453282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:28:42.808152 1453282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:28:42.815244 1453282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:28:42.815326 1453282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:28:42.822684 1453282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:28:42.830131 1453282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:28:42.830194 1453282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:28:42.837073 1453282 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:28:42.844251 1453282 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:28:42.844343 1453282 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:28:42.851408 1453282 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:28:42.896940 1453282 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1018 09:28:42.897294 1453282 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:28:42.942046 1453282 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:28:42.942122 1453282 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:28:42.942165 1453282 kubeadm.go:318] OS: Linux
	I1018 09:28:42.942221 1453282 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:28:42.942276 1453282 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:28:42.942327 1453282 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:28:42.942381 1453282 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:28:42.942437 1453282 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:28:42.942490 1453282 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:28:42.942540 1453282 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:28:42.942594 1453282 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:28:42.942653 1453282 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:28:43.030643 1453282 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:28:43.030814 1453282 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:28:43.030945 1453282 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:28:43.183226 1453282 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:28:43.189425 1453282 out.go:252]   - Generating certificates and keys ...
	I1018 09:28:43.189535 1453282 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:28:43.189635 1453282 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:28:43.916376 1453282 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:28:44.308684 1453282 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:28:44.750217 1453282 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:28:44.895502 1453282 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:28:45.100941 1453282 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:28:45.101366 1453282 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-136598] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:28:45.965056 1453282 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:28:45.965632 1453282 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-136598] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:28:46.446333 1453282 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:28:46.921724 1453282 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:28:47.312293 1453282 kubeadm.go:318] [certs] Generating "sa" key and public key
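Note: each "[certs] Generating ..." line above is an X.509 keypair being created and signed (self-signed for the CAs, CA-signed with the listed SANs for serving certs). A minimal sketch of the CA case using Go's standard library — illustrative only, not kubeadm's actual code path:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate a key, fill in a CA template, and self-sign it -- the shape
	// of every "[certs] Generating ... certificate and key" step above.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "etcd-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```

For the leaf certs (etcd/server, apiserver-kubelet-client, ...) the template would additionally carry DNSNames/IPAddresses like the SANs logged above and be signed by the CA rather than by itself.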
	I1018 09:28:47.312556 1453282 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:28:47.636642 1453282 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:28:47.934178 1453282 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:28:48.234557 1453282 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:28:49.459178 1453282 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:28:49.459975 1453282 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:28:49.462776 1453282 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:28:49.466609 1453282 out.go:252]   - Booting up control plane ...
	I1018 09:28:49.466724 1453282 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:28:49.466814 1453282 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:28:49.466889 1453282 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:28:49.483185 1453282 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:28:49.484220 1453282 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:28:49.484274 1453282 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:28:49.612230 1453282 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1018 09:28:58.115756 1453282 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.503907 seconds
	I1018 09:28:58.115911 1453282 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:28:58.132538 1453282 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:28:58.656732 1453282 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:28:58.656964 1453282 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-136598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:28:59.169358 1453282 kubeadm.go:318] [bootstrap-token] Using token: fef2ge.i0o416zvgav0ih7h
	I1018 09:28:59.172351 1453282 out.go:252]   - Configuring RBAC rules ...
	I1018 09:28:59.172486 1453282 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:28:59.177338 1453282 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:28:59.193601 1453282 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:28:59.197550 1453282 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1018 09:28:59.201466 1453282 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:28:59.205649 1453282 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:28:59.221163 1453282 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:28:59.515802 1453282 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:28:59.593170 1453282 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:28:59.597354 1453282 kubeadm.go:318] 
	I1018 09:28:59.597436 1453282 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:28:59.597446 1453282 kubeadm.go:318] 
	I1018 09:28:59.597527 1453282 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:28:59.597536 1453282 kubeadm.go:318] 
	I1018 09:28:59.597563 1453282 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:28:59.598513 1453282 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:28:59.598577 1453282 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:28:59.598585 1453282 kubeadm.go:318] 
	I1018 09:28:59.598641 1453282 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:28:59.598645 1453282 kubeadm.go:318] 
	I1018 09:28:59.598695 1453282 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:28:59.598705 1453282 kubeadm.go:318] 
	I1018 09:28:59.598760 1453282 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:28:59.598838 1453282 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:28:59.598909 1453282 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:28:59.598913 1453282 kubeadm.go:318] 
	I1018 09:28:59.599299 1453282 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:28:59.599420 1453282 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:28:59.599431 1453282 kubeadm.go:318] 
	I1018 09:28:59.599521 1453282 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fef2ge.i0o416zvgav0ih7h \
	I1018 09:28:59.599630 1453282 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 \
	I1018 09:28:59.599651 1453282 kubeadm.go:318] 	--control-plane 
	I1018 09:28:59.599655 1453282 kubeadm.go:318] 
	I1018 09:28:59.599964 1453282 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:28:59.599976 1453282 kubeadm.go:318] 
	I1018 09:28:59.600277 1453282 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fef2ge.i0o416zvgav0ih7h \
	I1018 09:28:59.600613 1453282 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 
	I1018 09:28:59.604568 1453282 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 09:28:59.604688 1453282 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
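Note: the `--discovery-token-ca-cert-hash` in the join commands above is a SHA-256 digest of the cluster CA certificate's Subject Public Key Info (the RFC 7469 pin format kubeadm uses). A sketch of recomputing it, assuming the standard CA path on the control plane:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hash the CA cert's Subject Public Key Info, as kubeadm does when it
	// prints --discovery-token-ca-cert-hash sha256:... above.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
```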
	I1018 09:28:59.604703 1453282 cni.go:84] Creating CNI manager for ""
	I1018 09:28:59.604711 1453282 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:28:59.607804 1453282 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:28:59.610932 1453282 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:28:59.616294 1453282 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1018 09:28:59.616355 1453282 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:28:59.643251 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:29:00.754346 1453282 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.111055533s)
	I1018 09:29:00.754394 1453282 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:29:00.754509 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:00.754578 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-136598 minikube.k8s.io/updated_at=2025_10_18T09_29_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=old-k8s-version-136598 minikube.k8s.io/primary=true
	I1018 09:29:00.953901 1453282 ops.go:34] apiserver oom_adj: -16
	I1018 09:29:00.954013 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:01.455002 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:01.954841 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:02.454126 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:02.954142 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:03.454656 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:03.954300 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:04.455108 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:04.954631 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:05.454931 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:05.954139 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:06.454719 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:06.954914 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:07.454139 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:07.954089 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:08.454331 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:08.954635 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:09.454396 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:09.954360 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:10.454787 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:10.954398 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:11.454927 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:11.954131 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:12.454443 1453282 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:29:12.554652 1453282 kubeadm.go:1113] duration metric: took 11.800188063s to wait for elevateKubeSystemPrivileges
	I1018 09:29:12.554685 1453282 kubeadm.go:402] duration metric: took 29.816379406s to StartCluster
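Note: the ~500ms cadence of `kubectl get sa default` above is a plain poll-until-success loop — the default ServiceAccount appearing is the signal that the controller-manager is up, after which the cluster-admin RBAC binding can be created. A minimal stdlib-only sketch of that wait (hypothetical function name; paths taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls like the loop above until `kubectl get sa default`
// succeeds, or the timeout expires.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.0/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		panic(err)
	}
}
```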
	I1018 09:29:12.554703 1453282 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:29:12.554776 1453282 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:29:12.555700 1453282 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:29:12.555939 1453282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:29:12.555955 1453282 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:29:12.556192 1453282 config.go:182] Loaded profile config "old-k8s-version-136598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:29:12.556228 1453282 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:29:12.556283 1453282 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-136598"
	I1018 09:29:12.556297 1453282 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-136598"
	I1018 09:29:12.556319 1453282 host.go:66] Checking if "old-k8s-version-136598" exists ...
	I1018 09:29:12.556803 1453282 cli_runner.go:164] Run: docker container inspect old-k8s-version-136598 --format={{.State.Status}}
	I1018 09:29:12.557142 1453282 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-136598"
	I1018 09:29:12.557162 1453282 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-136598"
	I1018 09:29:12.557421 1453282 cli_runner.go:164] Run: docker container inspect old-k8s-version-136598 --format={{.State.Status}}
	I1018 09:29:12.559145 1453282 out.go:179] * Verifying Kubernetes components...
	I1018 09:29:12.563045 1453282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:29:12.593446 1453282 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-136598"
	I1018 09:29:12.593495 1453282 host.go:66] Checking if "old-k8s-version-136598" exists ...
	I1018 09:29:12.593913 1453282 cli_runner.go:164] Run: docker container inspect old-k8s-version-136598 --format={{.State.Status}}
	I1018 09:29:12.606326 1453282 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:29:12.610310 1453282 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:29:12.610335 1453282 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:29:12.610423 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:29:12.643496 1453282 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:29:12.643516 1453282 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:29:12.643576 1453282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:29:12.653776 1453282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34871 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa Username:docker}
	I1018 09:29:12.692989 1453282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34871 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa Username:docker}
	I1018 09:29:12.866500 1453282 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:29:12.885136 1453282 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:29:12.906945 1453282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:29:12.962268 1453282 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:29:13.789299 1453282 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 09:29:13.790883 1453282 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-136598" to be "Ready" ...
	I1018 09:29:14.195093 1453282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288054496s)
	I1018 09:29:14.195171 1453282 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.232829332s)
	I1018 09:29:14.208775 1453282 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:29:14.211605 1453282 addons.go:514] duration metric: took 1.655358768s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:29:14.296620 1453282 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-136598" context rescaled to 1 replicas
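Note: rescaling the coredns deployment to 1 replica, as logged above, is a scale-subresource update. A sketch with client-go (kubeconfig path taken from the log; error handling abbreviated):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Read the scale subresource of kube-system/coredns and write it back
	// with Replicas=1, matching the "rescaled to 1 replicas" line above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
```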
	W1018 09:29:15.794175 1453282 node_ready.go:57] node "old-k8s-version-136598" has "Ready":"False" status (will retry)
	W1018 09:29:17.794848 1453282 node_ready.go:57] node "old-k8s-version-136598" has "Ready":"False" status (will retry)
	W1018 09:29:20.294118 1453282 node_ready.go:57] node "old-k8s-version-136598" has "Ready":"False" status (will retry)
	W1018 09:29:22.294311 1453282 node_ready.go:57] node "old-k8s-version-136598" has "Ready":"False" status (will retry)
	W1018 09:29:24.294341 1453282 node_ready.go:57] node "old-k8s-version-136598" has "Ready":"False" status (will retry)
	W1018 09:29:26.294486 1453282 node_ready.go:57] node "old-k8s-version-136598" has "Ready":"False" status (will retry)
	I1018 09:29:27.294348 1453282 node_ready.go:49] node "old-k8s-version-136598" is "Ready"
	I1018 09:29:27.294378 1453282 node_ready.go:38] duration metric: took 13.503467339s for node "old-k8s-version-136598" to be "Ready" ...
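Note: the `has "Ready":"False" status (will retry)` lines above come from repeatedly reading the node object and inspecting its Ready condition. A client-go sketch of that check (node name taken from the log, kubeconfig path an assumption):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		if ok, err := nodeReady(cs, "old-k8s-version-136598"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```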
	I1018 09:29:27.294392 1453282 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:29:27.294451 1453282 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:29:27.305864 1453282 api_server.go:72] duration metric: took 14.749878235s to wait for apiserver process to appear ...
	I1018 09:29:27.305894 1453282 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:29:27.305912 1453282 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:29:27.315453 1453282 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:29:27.317262 1453282 api_server.go:141] control plane version: v1.28.0
	I1018 09:29:27.317289 1453282 api_server.go:131] duration metric: took 11.387958ms to wait for apiserver health ...
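Note: the healthz probe above is just an HTTPS GET against the apiserver expecting a 200/"ok" body. A sketch with net/http (address taken from the log; certificate verification is skipped here only because this sketch loads no CA bundle, whereas minikube trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```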
	I1018 09:29:27.317299 1453282 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:29:27.321003 1453282 system_pods.go:59] 8 kube-system pods found
	I1018 09:29:27.321052 1453282 system_pods.go:61] "coredns-5dd5756b68-6ldkv" [1e371ff4-f811-4f25-be20-c7c6f4bbb347] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:29:27.321059 1453282 system_pods.go:61] "etcd-old-k8s-version-136598" [3580b96f-8145-4543-8a48-7c06a1b8ab0d] Running
	I1018 09:29:27.321069 1453282 system_pods.go:61] "kindnet-zff87" [552b54fd-9d0c-442d-85ad-d5675f145793] Running
	I1018 09:29:27.321075 1453282 system_pods.go:61] "kube-apiserver-old-k8s-version-136598" [0164658e-a89b-4e33-a6f1-94a20ad62371] Running
	I1018 09:29:27.321084 1453282 system_pods.go:61] "kube-controller-manager-old-k8s-version-136598" [09ab0e41-2e94-4af1-b7b6-fefc0aadc5c7] Running
	I1018 09:29:27.321089 1453282 system_pods.go:61] "kube-proxy-9pwdq" [67a838d5-5a9d-4f85-a92f-3b01432883a0] Running
	I1018 09:29:27.321093 1453282 system_pods.go:61] "kube-scheduler-old-k8s-version-136598" [c82d39bc-fecc-4b44-ad06-876277d2ae30] Running
	I1018 09:29:27.321100 1453282 system_pods.go:61] "storage-provisioner" [e560f819-0184-4ccc-9810-c40017f747e8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:29:27.321112 1453282 system_pods.go:74] duration metric: took 3.808354ms to wait for pod list to return data ...
	I1018 09:29:27.321121 1453282 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:29:27.323366 1453282 default_sa.go:45] found service account: "default"
	I1018 09:29:27.323388 1453282 default_sa.go:55] duration metric: took 2.258694ms for default service account to be created ...
	I1018 09:29:27.323397 1453282 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:29:27.326633 1453282 system_pods.go:86] 8 kube-system pods found
	I1018 09:29:27.326667 1453282 system_pods.go:89] "coredns-5dd5756b68-6ldkv" [1e371ff4-f811-4f25-be20-c7c6f4bbb347] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:29:27.326674 1453282 system_pods.go:89] "etcd-old-k8s-version-136598" [3580b96f-8145-4543-8a48-7c06a1b8ab0d] Running
	I1018 09:29:27.326680 1453282 system_pods.go:89] "kindnet-zff87" [552b54fd-9d0c-442d-85ad-d5675f145793] Running
	I1018 09:29:27.326685 1453282 system_pods.go:89] "kube-apiserver-old-k8s-version-136598" [0164658e-a89b-4e33-a6f1-94a20ad62371] Running
	I1018 09:29:27.326690 1453282 system_pods.go:89] "kube-controller-manager-old-k8s-version-136598" [09ab0e41-2e94-4af1-b7b6-fefc0aadc5c7] Running
	I1018 09:29:27.326699 1453282 system_pods.go:89] "kube-proxy-9pwdq" [67a838d5-5a9d-4f85-a92f-3b01432883a0] Running
	I1018 09:29:27.326704 1453282 system_pods.go:89] "kube-scheduler-old-k8s-version-136598" [c82d39bc-fecc-4b44-ad06-876277d2ae30] Running
	I1018 09:29:27.326712 1453282 system_pods.go:89] "storage-provisioner" [e560f819-0184-4ccc-9810-c40017f747e8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:29:27.326733 1453282 retry.go:31] will retry after 246.305559ms: missing components: kube-dns
	I1018 09:29:27.580823 1453282 system_pods.go:86] 8 kube-system pods found
	I1018 09:29:27.580872 1453282 system_pods.go:89] "coredns-5dd5756b68-6ldkv" [1e371ff4-f811-4f25-be20-c7c6f4bbb347] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:29:27.580882 1453282 system_pods.go:89] "etcd-old-k8s-version-136598" [3580b96f-8145-4543-8a48-7c06a1b8ab0d] Running
	I1018 09:29:27.580888 1453282 system_pods.go:89] "kindnet-zff87" [552b54fd-9d0c-442d-85ad-d5675f145793] Running
	I1018 09:29:27.580892 1453282 system_pods.go:89] "kube-apiserver-old-k8s-version-136598" [0164658e-a89b-4e33-a6f1-94a20ad62371] Running
	I1018 09:29:27.580900 1453282 system_pods.go:89] "kube-controller-manager-old-k8s-version-136598" [09ab0e41-2e94-4af1-b7b6-fefc0aadc5c7] Running
	I1018 09:29:27.580904 1453282 system_pods.go:89] "kube-proxy-9pwdq" [67a838d5-5a9d-4f85-a92f-3b01432883a0] Running
	I1018 09:29:27.580908 1453282 system_pods.go:89] "kube-scheduler-old-k8s-version-136598" [c82d39bc-fecc-4b44-ad06-876277d2ae30] Running
	I1018 09:29:27.580913 1453282 system_pods.go:89] "storage-provisioner" [e560f819-0184-4ccc-9810-c40017f747e8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:29:27.580930 1453282 retry.go:31] will retry after 343.586062ms: missing components: kube-dns
	I1018 09:29:27.934429 1453282 system_pods.go:86] 8 kube-system pods found
	I1018 09:29:27.934465 1453282 system_pods.go:89] "coredns-5dd5756b68-6ldkv" [1e371ff4-f811-4f25-be20-c7c6f4bbb347] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:29:27.934473 1453282 system_pods.go:89] "etcd-old-k8s-version-136598" [3580b96f-8145-4543-8a48-7c06a1b8ab0d] Running
	I1018 09:29:27.934479 1453282 system_pods.go:89] "kindnet-zff87" [552b54fd-9d0c-442d-85ad-d5675f145793] Running
	I1018 09:29:27.934485 1453282 system_pods.go:89] "kube-apiserver-old-k8s-version-136598" [0164658e-a89b-4e33-a6f1-94a20ad62371] Running
	I1018 09:29:27.934490 1453282 system_pods.go:89] "kube-controller-manager-old-k8s-version-136598" [09ab0e41-2e94-4af1-b7b6-fefc0aadc5c7] Running
	I1018 09:29:27.934494 1453282 system_pods.go:89] "kube-proxy-9pwdq" [67a838d5-5a9d-4f85-a92f-3b01432883a0] Running
	I1018 09:29:27.934498 1453282 system_pods.go:89] "kube-scheduler-old-k8s-version-136598" [c82d39bc-fecc-4b44-ad06-876277d2ae30] Running
	I1018 09:29:27.934504 1453282 system_pods.go:89] "storage-provisioner" [e560f819-0184-4ccc-9810-c40017f747e8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:29:27.934516 1453282 system_pods.go:126] duration metric: took 611.113417ms to wait for k8s-apps to be running ...
	I1018 09:29:27.934529 1453282 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:29:27.934582 1453282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:29:27.955245 1453282 system_svc.go:56] duration metric: took 20.704966ms WaitForService to wait for kubelet
	I1018 09:29:27.955325 1453282 kubeadm.go:586] duration metric: took 15.399344488s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:29:27.955360 1453282 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:29:27.961206 1453282 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:29:27.961239 1453282 node_conditions.go:123] node cpu capacity is 2
	I1018 09:29:27.961252 1453282 node_conditions.go:105] duration metric: took 5.871492ms to run NodePressure ...
	I1018 09:29:27.961264 1453282 start.go:241] waiting for startup goroutines ...
	I1018 09:29:27.961271 1453282 start.go:246] waiting for cluster config update ...
	I1018 09:29:27.961282 1453282 start.go:255] writing updated cluster config ...
	I1018 09:29:27.961581 1453282 ssh_runner.go:195] Run: rm -f paused
	I1018 09:29:27.965820 1453282 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:29:27.998910 1453282 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6ldkv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:28.006961 1453282 pod_ready.go:94] pod "coredns-5dd5756b68-6ldkv" is "Ready"
	I1018 09:29:28.007064 1453282 pod_ready.go:86] duration metric: took 8.070421ms for pod "coredns-5dd5756b68-6ldkv" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:28.010716 1453282 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-136598" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:28.016864 1453282 pod_ready.go:94] pod "etcd-old-k8s-version-136598" is "Ready"
	I1018 09:29:28.016951 1453282 pod_ready.go:86] duration metric: took 6.155488ms for pod "etcd-old-k8s-version-136598" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:28.020606 1453282 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-136598" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:28.027598 1453282 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-136598" is "Ready"
	I1018 09:29:28.027679 1453282 pod_ready.go:86] duration metric: took 6.962005ms for pod "kube-apiserver-old-k8s-version-136598" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:28.032625 1453282 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-136598" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:28.370202 1453282 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-136598" is "Ready"
	I1018 09:29:28.370230 1453282 pod_ready.go:86] duration metric: took 337.506765ms for pod "kube-controller-manager-old-k8s-version-136598" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:28.570773 1453282 pod_ready.go:83] waiting for pod "kube-proxy-9pwdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:28.970265 1453282 pod_ready.go:94] pod "kube-proxy-9pwdq" is "Ready"
	I1018 09:29:28.970343 1453282 pod_ready.go:86] duration metric: took 399.542258ms for pod "kube-proxy-9pwdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:29.170969 1453282 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-136598" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:29.570315 1453282 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-136598" is "Ready"
	I1018 09:29:29.570342 1453282 pod_ready.go:86] duration metric: took 399.341484ms for pod "kube-scheduler-old-k8s-version-136598" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:29:29.570355 1453282 pod_ready.go:40] duration metric: took 1.604507597s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
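Note: the extra wait above lists kube-system pods by the labels shown (k8s-app=kube-dns, component=etcd, ...) and checks each pod's Ready condition. A client-go sketch for one such selector (kubeconfig path an assumption):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods matching one of the label selectors from the log line above
	// and print each pod's Ready condition.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("%s Ready=%s\n", p.Name, c.Status)
			}
		}
	}
}
```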
	I1018 09:29:29.630097 1453282 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1018 09:29:29.633228 1453282 out.go:203] 
	W1018 09:29:29.636088 1453282 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:29:29.639067 1453282 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:29:29.642488 1453282 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-136598" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 09:29:27 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:27.613934217Z" level=info msg="Created container f55a5af9307648b35a5210da9b6a7ccdfa2173afe5a1a10695c91169a9e3ca26: kube-system/coredns-5dd5756b68-6ldkv/coredns" id=68a2da6a-d86b-430e-a0ab-40d3bde1f34c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:29:27 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:27.614743073Z" level=info msg="Starting container: f55a5af9307648b35a5210da9b6a7ccdfa2173afe5a1a10695c91169a9e3ca26" id=a175a6d8-d14d-48fe-b05b-436bff6d3bef name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:29:27 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:27.618986475Z" level=info msg="Started container" PID=1921 containerID=f55a5af9307648b35a5210da9b6a7ccdfa2173afe5a1a10695c91169a9e3ca26 description=kube-system/coredns-5dd5756b68-6ldkv/coredns id=a175a6d8-d14d-48fe-b05b-436bff6d3bef name=/runtime.v1.RuntimeService/StartContainer sandboxID=81f50309e0fd26d408d18de36b898a1edc465ba9b4183f4b44d373cb58406c51
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.445216303Z" level=info msg="Running pod sandbox: default/busybox/POD" id=1ecbca0d-3edd-4c37-b499-b6a442a7f590 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.445297745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.450756818Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6b93301e1e5a6f7c2956263993497bcb162d436e5ba1fb801129c9701598fb7c UID:b6b0e2eb-5c14-4392-9e93-758c737f224d NetNS:/var/run/netns/ee045f94-3ac7-4ce2-9e14-739f87807975 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001379078}] Aliases:map[]}"
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.450940813Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.462738458Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6b93301e1e5a6f7c2956263993497bcb162d436e5ba1fb801129c9701598fb7c UID:b6b0e2eb-5c14-4392-9e93-758c737f224d NetNS:/var/run/netns/ee045f94-3ac7-4ce2-9e14-739f87807975 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001379078}] Aliases:map[]}"
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.463042934Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.467755057Z" level=info msg="Ran pod sandbox 6b93301e1e5a6f7c2956263993497bcb162d436e5ba1fb801129c9701598fb7c with infra container: default/busybox/POD" id=1ecbca0d-3edd-4c37-b499-b6a442a7f590 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.470550737Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2fc7a7f7-dcab-4a3b-b928-54b438c30e45 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.470680292Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2fc7a7f7-dcab-4a3b-b928-54b438c30e45 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.470719003Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2fc7a7f7-dcab-4a3b-b928-54b438c30e45 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.472357005Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d8cd1877-1f98-49ac-80e5-b714bbc63cf0 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:29:30 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:30.475012898Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:29:32 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:32.544965363Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=d8cd1877-1f98-49ac-80e5-b714bbc63cf0 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:29:32 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:32.547143132Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c81986d7-055e-4169-9f4f-1ba9fadb09c1 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:29:32 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:32.548686917Z" level=info msg="Creating container: default/busybox/busybox" id=d547b312-a9e7-4771-b561-acf88cfa6dfc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:29:32 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:32.54946667Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:29:32 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:32.556163525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:29:32 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:32.556805107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:29:32 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:32.571080969Z" level=info msg="Created container 6bfe545ae2f0787949369743954650656d18d71908ebc99837d03870ab4b9d94: default/busybox/busybox" id=d547b312-a9e7-4771-b561-acf88cfa6dfc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:29:32 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:32.572394721Z" level=info msg="Starting container: 6bfe545ae2f0787949369743954650656d18d71908ebc99837d03870ab4b9d94" id=ee766fe0-0154-4f0f-99aa-04d584f3fc41 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:29:32 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:32.57406834Z" level=info msg="Started container" PID=1976 containerID=6bfe545ae2f0787949369743954650656d18d71908ebc99837d03870ab4b9d94 description=default/busybox/busybox id=ee766fe0-0154-4f0f-99aa-04d584f3fc41 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6b93301e1e5a6f7c2956263993497bcb162d436e5ba1fb801129c9701598fb7c
	Oct 18 09:29:38 old-k8s-version-136598 crio[835]: time="2025-10-18T09:29:38.977226682Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	6bfe545ae2f07       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   6b93301e1e5a6       busybox                                          default
	f55a5af930764       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   81f50309e0fd2       coredns-5dd5756b68-6ldkv                         kube-system
	d03a81a9acc35       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   d5df4b8b5d6db       storage-provisioner                              kube-system
	70aaf8d2f93f8       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   3ad7eeaa214ae       kindnet-zff87                                    kube-system
	67f88a3e0474b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   c732ef48ac5bc       kube-proxy-9pwdq                                 kube-system
	ff73c6934b2c8       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   cf4c0c80e9a95       kube-controller-manager-old-k8s-version-136598   kube-system
	e98c6cd72aa27       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   5027960c5b656       kube-apiserver-old-k8s-version-136598            kube-system
	7a641c9d599ac       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   84d815ab70d0c       kube-scheduler-old-k8s-version-136598            kube-system
	3e3f22821eaf4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   1c0b1f250c717       etcd-old-k8s-version-136598                      kube-system
	
	
	==> coredns [f55a5af9307648b35a5210da9b6a7ccdfa2173afe5a1a10695c91169a9e3ca26] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51256 - 53097 "HINFO IN 2363646576929560124.89178156631352042. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.013668912s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-136598
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-136598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=old-k8s-version-136598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_29_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:28:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-136598
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:29:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:29:30 +0000   Sat, 18 Oct 2025 09:28:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:29:30 +0000   Sat, 18 Oct 2025 09:28:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:29:30 +0000   Sat, 18 Oct 2025 09:28:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:29:30 +0000   Sat, 18 Oct 2025 09:29:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-136598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2ec042d2-4e94-4d4b-a1d0-dda9032068a7
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-6ldkv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-136598                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-zff87                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-136598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-136598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-9pwdq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-136598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-136598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-136598 event: Registered Node old-k8s-version-136598 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-136598 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 09:02] overlayfs: idmapped layers are currently not supported
	[Oct18 09:07] overlayfs: idmapped layers are currently not supported
	[ +35.005632] overlayfs: idmapped layers are currently not supported
	[Oct18 09:08] overlayfs: idmapped layers are currently not supported
	[Oct18 09:10] overlayfs: idmapped layers are currently not supported
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3e3f22821eaf4f84d747e8ef45c90c748db223e46dac230bf8697999451777b2] <==
	{"level":"info","ts":"2025-10-18T09:28:52.444116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T09:28:52.455899Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T09:28:52.446093Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:28:52.456224Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:28:52.456296Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:28:52.446121Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:28:52.456385Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:28:53.101638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-18T09:28:53.101685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-18T09:28:53.101727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-18T09:28:53.101742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:28:53.101757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T09:28:53.101777Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-18T09:28:53.101785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T09:28:53.10805Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-136598 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:28:53.108132Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:28:53.10943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T09:28:53.109593Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:28:53.109739Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:28:53.109788Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T09:28:53.108208Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:28:53.12275Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:28:53.122839Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:28:53.122866Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:28:53.123591Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:29:40 up 11:12,  0 user,  load average: 2.44, 3.06, 2.47
	Linux old-k8s-version-136598 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [70aaf8d2f93f8e8000892e7b69ab4c156eacee6aaa46cdb189558665872c3e00] <==
	I1018 09:29:16.118687       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:29:16.208620       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:29:16.208796       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:29:16.208813       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:29:16.208827       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:29:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:29:16.409781       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:29:16.409808       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:29:16.409817       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:29:16.410121       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:29:16.609984       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:29:16.610098       1 metrics.go:72] Registering metrics
	I1018 09:29:16.610189       1 controller.go:711] "Syncing nftables rules"
	I1018 09:29:26.413600       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:29:26.413646       1 main.go:301] handling current node
	I1018 09:29:36.409162       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:29:36.409217       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e98c6cd72aa27d3694c6182d6f83695cd3f102511b95b0129d2c765e8cfe14f4] <==
	I1018 09:28:56.569952       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:28:56.569987       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1018 09:28:56.592587       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 09:28:56.593073       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 09:28:56.593091       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 09:28:56.593321       1 aggregator.go:166] initial CRD sync complete...
	I1018 09:28:56.593339       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 09:28:56.593345       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:28:56.593530       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:28:56.648717       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:28:57.203779       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:28:57.210970       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:28:57.210995       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:28:57.839518       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:28:57.889109       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:28:57.951943       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:28:57.959031       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 09:28:57.960311       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 09:28:57.968713       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:28:58.319389       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 09:28:59.499632       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 09:28:59.514569       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:28:59.526019       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1018 09:29:12.854438       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 09:29:12.899149       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ff73c6934b2c85c9d960731ac66223c2a912c5e1bb9b69c5b4e784210db3a9bc] <==
	I1018 09:29:12.172752       1 shared_informer.go:318] Caches are synced for ephemeral
	I1018 09:29:12.177038       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:29:12.180788       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:29:12.551790       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:29:12.573174       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:29:12.573267       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 09:29:12.867524       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1018 09:29:12.949956       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9pwdq"
	I1018 09:29:12.949982       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zff87"
	I1018 09:29:13.064804       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-nnd67"
	I1018 09:29:13.088647       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6ldkv"
	I1018 09:29:13.137032       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="271.561819ms"
	I1018 09:29:13.157514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.435625ms"
	I1018 09:29:13.157588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.339µs"
	I1018 09:29:13.873592       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1018 09:29:13.916962       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-nnd67"
	I1018 09:29:13.953851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.57662ms"
	I1018 09:29:13.991636       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="37.743181ms"
	I1018 09:29:14.000832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="139.959µs"
	I1018 09:29:26.920240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.666µs"
	I1018 09:29:26.942159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.231µs"
	I1018 09:29:27.069404       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1018 09:29:27.914649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.902µs"
	I1018 09:29:27.964453       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.773844ms"
	I1018 09:29:27.964639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.469µs"
	
	
	==> kube-proxy [67f88a3e0474b9bd47e557d07921a7ca1d59cdb31ea0d73685e25d4ae160efd9] <==
	I1018 09:29:13.513118       1 server_others.go:69] "Using iptables proxy"
	I1018 09:29:13.528083       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 09:29:13.599060       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:29:13.601007       1 server_others.go:152] "Using iptables Proxier"
	I1018 09:29:13.601038       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 09:29:13.601047       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 09:29:13.601078       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 09:29:13.604195       1 server.go:846] "Version info" version="v1.28.0"
	I1018 09:29:13.604213       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:29:13.606354       1 config.go:188] "Starting service config controller"
	I1018 09:29:13.606378       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 09:29:13.606405       1 config.go:97] "Starting endpoint slice config controller"
	I1018 09:29:13.606409       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 09:29:13.606983       1 config.go:315] "Starting node config controller"
	I1018 09:29:13.606996       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 09:29:13.707933       1 shared_informer.go:318] Caches are synced for node config
	I1018 09:29:13.707976       1 shared_informer.go:318] Caches are synced for service config
	I1018 09:29:13.708002       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7a641c9d599ac29ae8ef6285c0c3a8b93300ca0ec25b33551c906b60ac269236] <==
	W1018 09:28:57.441665       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 09:28:57.441677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 09:28:57.441698       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 09:28:57.441707       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 09:28:57.441757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1018 09:28:57.441767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1018 09:28:57.441767       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 09:28:57.441776       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 09:28:57.441825       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 09:28:57.441829       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 09:28:57.441835       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1018 09:28:57.441842       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 09:28:57.441886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1018 09:28:57.441892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 09:28:57.441897       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 09:28:57.441903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1018 09:28:57.441945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1018 09:28:57.441953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1018 09:28:57.441953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1018 09:28:57.441973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1018 09:28:57.442011       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1018 09:28:57.442020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1018 09:28:57.442655       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 09:28:57.442717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1018 09:28:58.733149       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 09:29:12 old-k8s-version-136598 kubelet[1356]: I1018 09:29:12.984222    1356 topology_manager.go:215] "Topology Admit Handler" podUID="67a838d5-5a9d-4f85-a92f-3b01432883a0" podNamespace="kube-system" podName="kube-proxy-9pwdq"
	Oct 18 09:29:13 old-k8s-version-136598 kubelet[1356]: I1018 09:29:13.115033    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/552b54fd-9d0c-442d-85ad-d5675f145793-cni-cfg\") pod \"kindnet-zff87\" (UID: \"552b54fd-9d0c-442d-85ad-d5675f145793\") " pod="kube-system/kindnet-zff87"
	Oct 18 09:29:13 old-k8s-version-136598 kubelet[1356]: I1018 09:29:13.115078    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552b54fd-9d0c-442d-85ad-d5675f145793-xtables-lock\") pod \"kindnet-zff87\" (UID: \"552b54fd-9d0c-442d-85ad-d5675f145793\") " pod="kube-system/kindnet-zff87"
	Oct 18 09:29:13 old-k8s-version-136598 kubelet[1356]: I1018 09:29:13.115112    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67a838d5-5a9d-4f85-a92f-3b01432883a0-kube-proxy\") pod \"kube-proxy-9pwdq\" (UID: \"67a838d5-5a9d-4f85-a92f-3b01432883a0\") " pod="kube-system/kube-proxy-9pwdq"
	Oct 18 09:29:13 old-k8s-version-136598 kubelet[1356]: I1018 09:29:13.115142    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67a838d5-5a9d-4f85-a92f-3b01432883a0-xtables-lock\") pod \"kube-proxy-9pwdq\" (UID: \"67a838d5-5a9d-4f85-a92f-3b01432883a0\") " pod="kube-system/kube-proxy-9pwdq"
	Oct 18 09:29:13 old-k8s-version-136598 kubelet[1356]: I1018 09:29:13.115263    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552b54fd-9d0c-442d-85ad-d5675f145793-lib-modules\") pod \"kindnet-zff87\" (UID: \"552b54fd-9d0c-442d-85ad-d5675f145793\") " pod="kube-system/kindnet-zff87"
	Oct 18 09:29:13 old-k8s-version-136598 kubelet[1356]: I1018 09:29:13.115299    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srzhc\" (UniqueName: \"kubernetes.io/projected/67a838d5-5a9d-4f85-a92f-3b01432883a0-kube-api-access-srzhc\") pod \"kube-proxy-9pwdq\" (UID: \"67a838d5-5a9d-4f85-a92f-3b01432883a0\") " pod="kube-system/kube-proxy-9pwdq"
	Oct 18 09:29:13 old-k8s-version-136598 kubelet[1356]: I1018 09:29:13.115387    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll5jz\" (UniqueName: \"kubernetes.io/projected/552b54fd-9d0c-442d-85ad-d5675f145793-kube-api-access-ll5jz\") pod \"kindnet-zff87\" (UID: \"552b54fd-9d0c-442d-85ad-d5675f145793\") " pod="kube-system/kindnet-zff87"
	Oct 18 09:29:13 old-k8s-version-136598 kubelet[1356]: I1018 09:29:13.115449    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67a838d5-5a9d-4f85-a92f-3b01432883a0-lib-modules\") pod \"kube-proxy-9pwdq\" (UID: \"67a838d5-5a9d-4f85-a92f-3b01432883a0\") " pod="kube-system/kube-proxy-9pwdq"
	Oct 18 09:29:13 old-k8s-version-136598 kubelet[1356]: W1018 09:29:13.302778    1356 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/crio-3ad7eeaa214aea3a9f6077c02cbcd6856dd6b842fa2227f9743d6900f0c103ed WatchSource:0}: Error finding container 3ad7eeaa214aea3a9f6077c02cbcd6856dd6b842fa2227f9743d6900f0c103ed: Status 404 returned error can't find the container with id 3ad7eeaa214aea3a9f6077c02cbcd6856dd6b842fa2227f9743d6900f0c103ed
	Oct 18 09:29:16 old-k8s-version-136598 kubelet[1356]: I1018 09:29:16.889189    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9pwdq" podStartSLOduration=4.889095612 podCreationTimestamp="2025-10-18 09:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:29:13.928120376 +0000 UTC m=+14.467894054" watchObservedRunningTime="2025-10-18 09:29:16.889095612 +0000 UTC m=+17.428869298"
	Oct 18 09:29:19 old-k8s-version-136598 kubelet[1356]: I1018 09:29:19.657409    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-zff87" podStartSLOduration=4.942614278 podCreationTimestamp="2025-10-18 09:29:12 +0000 UTC" firstStartedPulling="2025-10-18 09:29:13.309446134 +0000 UTC m=+13.849219795" lastFinishedPulling="2025-10-18 09:29:16.024198006 +0000 UTC m=+16.563971667" observedRunningTime="2025-10-18 09:29:16.88986675 +0000 UTC m=+17.429640419" watchObservedRunningTime="2025-10-18 09:29:19.65736615 +0000 UTC m=+20.197139819"
	Oct 18 09:29:26 old-k8s-version-136598 kubelet[1356]: I1018 09:29:26.888364    1356 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 18 09:29:26 old-k8s-version-136598 kubelet[1356]: I1018 09:29:26.919918    1356 topology_manager.go:215] "Topology Admit Handler" podUID="1e371ff4-f811-4f25-be20-c7c6f4bbb347" podNamespace="kube-system" podName="coredns-5dd5756b68-6ldkv"
	Oct 18 09:29:26 old-k8s-version-136598 kubelet[1356]: I1018 09:29:26.924796    1356 topology_manager.go:215] "Topology Admit Handler" podUID="e560f819-0184-4ccc-9810-c40017f747e8" podNamespace="kube-system" podName="storage-provisioner"
	Oct 18 09:29:27 old-k8s-version-136598 kubelet[1356]: I1018 09:29:27.114176    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e371ff4-f811-4f25-be20-c7c6f4bbb347-config-volume\") pod \"coredns-5dd5756b68-6ldkv\" (UID: \"1e371ff4-f811-4f25-be20-c7c6f4bbb347\") " pod="kube-system/coredns-5dd5756b68-6ldkv"
	Oct 18 09:29:27 old-k8s-version-136598 kubelet[1356]: I1018 09:29:27.114232    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e560f819-0184-4ccc-9810-c40017f747e8-tmp\") pod \"storage-provisioner\" (UID: \"e560f819-0184-4ccc-9810-c40017f747e8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:29:27 old-k8s-version-136598 kubelet[1356]: I1018 09:29:27.114265    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njlqp\" (UniqueName: \"kubernetes.io/projected/e560f819-0184-4ccc-9810-c40017f747e8-kube-api-access-njlqp\") pod \"storage-provisioner\" (UID: \"e560f819-0184-4ccc-9810-c40017f747e8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:29:27 old-k8s-version-136598 kubelet[1356]: I1018 09:29:27.114290    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbrg8\" (UniqueName: \"kubernetes.io/projected/1e371ff4-f811-4f25-be20-c7c6f4bbb347-kube-api-access-dbrg8\") pod \"coredns-5dd5756b68-6ldkv\" (UID: \"1e371ff4-f811-4f25-be20-c7c6f4bbb347\") " pod="kube-system/coredns-5dd5756b68-6ldkv"
	Oct 18 09:29:27 old-k8s-version-136598 kubelet[1356]: W1018 09:29:27.535220    1356 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/crio-d5df4b8b5d6dbe245ddcaaaad32591d036d5b6a5680b4e6b15d3bc58bcf2ce2f WatchSource:0}: Error finding container d5df4b8b5d6dbe245ddcaaaad32591d036d5b6a5680b4e6b15d3bc58bcf2ce2f: Status 404 returned error can't find the container with id d5df4b8b5d6dbe245ddcaaaad32591d036d5b6a5680b4e6b15d3bc58bcf2ce2f
	Oct 18 09:29:27 old-k8s-version-136598 kubelet[1356]: W1018 09:29:27.566746    1356 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/crio-81f50309e0fd26d408d18de36b898a1edc465ba9b4183f4b44d373cb58406c51 WatchSource:0}: Error finding container 81f50309e0fd26d408d18de36b898a1edc465ba9b4183f4b44d373cb58406c51: Status 404 returned error can't find the container with id 81f50309e0fd26d408d18de36b898a1edc465ba9b4183f4b44d373cb58406c51
	Oct 18 09:29:27 old-k8s-version-136598 kubelet[1356]: I1018 09:29:27.951871    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6ldkv" podStartSLOduration=14.951752383 podCreationTimestamp="2025-10-18 09:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:29:27.91383731 +0000 UTC m=+28.453610988" watchObservedRunningTime="2025-10-18 09:29:27.951752383 +0000 UTC m=+28.491526044"
	Oct 18 09:29:29 old-k8s-version-136598 kubelet[1356]: I1018 09:29:29.842015    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.841949602 podCreationTimestamp="2025-10-18 09:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:29:27.998882084 +0000 UTC m=+28.538655753" watchObservedRunningTime="2025-10-18 09:29:29.841949602 +0000 UTC m=+30.381723271"
	Oct 18 09:29:29 old-k8s-version-136598 kubelet[1356]: I1018 09:29:29.843021    1356 topology_manager.go:215] "Topology Admit Handler" podUID="b6b0e2eb-5c14-4392-9e93-758c737f224d" podNamespace="default" podName="busybox"
	Oct 18 09:29:30 old-k8s-version-136598 kubelet[1356]: I1018 09:29:30.036750    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xptnj\" (UniqueName: \"kubernetes.io/projected/b6b0e2eb-5c14-4392-9e93-758c737f224d-kube-api-access-xptnj\") pod \"busybox\" (UID: \"b6b0e2eb-5c14-4392-9e93-758c737f224d\") " pod="default/busybox"
	
	
	==> storage-provisioner [d03a81a9acc356336c7b7ffa68d5d53c4315b8a80b10064e819e11fc57f80e3d] <==
	I1018 09:29:27.604352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:29:27.622030       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:29:27.622185       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 09:29:27.639243       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:29:27.639446       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-136598_528f55a2-7f77-4db2-be5a-53478805edd9!
	I1018 09:29:27.640160       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc909686-371c-405c-ab2b-9cef06488b3d", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-136598_528f55a2-7f77-4db2-be5a-53478805edd9 became leader
	I1018 09:29:27.740017       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-136598_528f55a2-7f77-4db2-be5a-53478805edd9!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-136598 -n old-k8s-version-136598
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-136598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.42s)
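To triage a failure like the one above by hand, the harness' post-mortem steps can be replayed against the same profile. A minimal sketch, assuming the old-k8s-version-136598 profile from this run is still up and out/minikube-linux-arm64 was built as in the harness; the "-n 25" log-length value is an assumption about how much context is wanted, not something this report prescribes:

	# replay the harness' post-mortem collection by hand (hypothetical local run)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-136598 -n old-k8s-version-136598
	out/minikube-linux-arm64 -p old-k8s-version-136598 logs -n 25
	kubectl --context old-k8s-version-136598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running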

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-136598 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-136598 --alsologtostderr -v=1: exit status 80 (2.554118304s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-136598 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:30:55.980444 1460394 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:30:55.983980 1460394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:55.983996 1460394 out.go:374] Setting ErrFile to fd 2...
	I1018 09:30:55.984002 1460394 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:55.984280 1460394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:30:55.984556 1460394 out.go:368] Setting JSON to false
	I1018 09:30:55.984587 1460394 mustload.go:65] Loading cluster: old-k8s-version-136598
	I1018 09:30:55.984994 1460394 config.go:182] Loaded profile config "old-k8s-version-136598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:30:55.985441 1460394 cli_runner.go:164] Run: docker container inspect old-k8s-version-136598 --format={{.State.Status}}
	I1018 09:30:56.014957 1460394 host.go:66] Checking if "old-k8s-version-136598" exists ...
	I1018 09:30:56.015302 1460394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:56.102852 1460394 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:30:56.089573164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:56.103624 1460394 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-136598 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:30:56.108695 1460394 out.go:179] * Pausing node old-k8s-version-136598 ... 
	I1018 09:30:56.111905 1460394 host.go:66] Checking if "old-k8s-version-136598" exists ...
	I1018 09:30:56.112223 1460394 ssh_runner.go:195] Run: systemctl --version
	I1018 09:30:56.112273 1460394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-136598
	I1018 09:30:56.132660 1460394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34876 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/old-k8s-version-136598/id_rsa Username:docker}
	I1018 09:30:56.238618 1460394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:30:56.256741 1460394 pause.go:52] kubelet running: true
	I1018 09:30:56.256818 1460394 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:30:56.569000 1460394 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:30:56.569230 1460394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:30:56.677186 1460394 cri.go:89] found id: "8386a4b7788357740c93bd64a1e18437f5db13598342e846f345da3ae2796669"
	I1018 09:30:56.677209 1460394 cri.go:89] found id: "5ddf89e58fd0231dcb686b1ec34b78eba7c7382651d2a6669db94b08a351ad8f"
	I1018 09:30:56.677214 1460394 cri.go:89] found id: "4ed79a13689cc2b55515752533f3bc48fdf545aa3639d366a9d3b722274b9426"
	I1018 09:30:56.677218 1460394 cri.go:89] found id: "6bc656f23e97f2a26e96b76c07156bef906ac7390f71c3992b639108a73b3b77"
	I1018 09:30:56.677221 1460394 cri.go:89] found id: "83f2dfcd2a9a025f2474d147b1054078bec3a16567dfd26d3cf9d202de3cda59"
	I1018 09:30:56.677236 1460394 cri.go:89] found id: "eeff6e8a782500b5b3c99df3f19a42ccbe900a5e69acb513548515106b6b820b"
	I1018 09:30:56.677240 1460394 cri.go:89] found id: "4744dbee055e10babc2fda11917557a08f5b523b1ba26af96a30f9d1e4200027"
	I1018 09:30:56.677248 1460394 cri.go:89] found id: "f32a0ff7855256e0fb8bf0a5004fc3ea08393ad5781721d98ea878404aa56ba5"
	I1018 09:30:56.677253 1460394 cri.go:89] found id: "8eda135cfc037fbd8f05a4e7cfe080a910c91820b87edf4d7370b9af44b3bbc5"
	I1018 09:30:56.677259 1460394 cri.go:89] found id: "4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4"
	I1018 09:30:56.677262 1460394 cri.go:89] found id: "e5e655d2337a235dfde07e733ece6522bca028878923664cb1aa29ce8e0720ec"
	I1018 09:30:56.677265 1460394 cri.go:89] found id: ""
	I1018 09:30:56.677325 1460394 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:30:56.692160 1460394 retry.go:31] will retry after 184.457634ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:30:56Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:30:56.877579 1460394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:30:56.904529 1460394 pause.go:52] kubelet running: false
	I1018 09:30:56.904605 1460394 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:30:57.206371 1460394 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:30:57.206444 1460394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:30:57.331682 1460394 cri.go:89] found id: "8386a4b7788357740c93bd64a1e18437f5db13598342e846f345da3ae2796669"
	I1018 09:30:57.331702 1460394 cri.go:89] found id: "5ddf89e58fd0231dcb686b1ec34b78eba7c7382651d2a6669db94b08a351ad8f"
	I1018 09:30:57.331727 1460394 cri.go:89] found id: "4ed79a13689cc2b55515752533f3bc48fdf545aa3639d366a9d3b722274b9426"
	I1018 09:30:57.331731 1460394 cri.go:89] found id: "6bc656f23e97f2a26e96b76c07156bef906ac7390f71c3992b639108a73b3b77"
	I1018 09:30:57.331735 1460394 cri.go:89] found id: "83f2dfcd2a9a025f2474d147b1054078bec3a16567dfd26d3cf9d202de3cda59"
	I1018 09:30:57.331739 1460394 cri.go:89] found id: "eeff6e8a782500b5b3c99df3f19a42ccbe900a5e69acb513548515106b6b820b"
	I1018 09:30:57.331742 1460394 cri.go:89] found id: "4744dbee055e10babc2fda11917557a08f5b523b1ba26af96a30f9d1e4200027"
	I1018 09:30:57.331745 1460394 cri.go:89] found id: "f32a0ff7855256e0fb8bf0a5004fc3ea08393ad5781721d98ea878404aa56ba5"
	I1018 09:30:57.331748 1460394 cri.go:89] found id: "8eda135cfc037fbd8f05a4e7cfe080a910c91820b87edf4d7370b9af44b3bbc5"
	I1018 09:30:57.331754 1460394 cri.go:89] found id: "4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4"
	I1018 09:30:57.331757 1460394 cri.go:89] found id: "e5e655d2337a235dfde07e733ece6522bca028878923664cb1aa29ce8e0720ec"
	I1018 09:30:57.331760 1460394 cri.go:89] found id: ""
	I1018 09:30:57.331806 1460394 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:30:57.343598 1460394 retry.go:31] will retry after 459.172464ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:30:57Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:30:57.803175 1460394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:30:57.823730 1460394 pause.go:52] kubelet running: false
	I1018 09:30:57.823795 1460394 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:30:58.223267 1460394 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:30:58.223354 1460394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:30:58.374537 1460394 cri.go:89] found id: "8386a4b7788357740c93bd64a1e18437f5db13598342e846f345da3ae2796669"
	I1018 09:30:58.374561 1460394 cri.go:89] found id: "5ddf89e58fd0231dcb686b1ec34b78eba7c7382651d2a6669db94b08a351ad8f"
	I1018 09:30:58.374567 1460394 cri.go:89] found id: "4ed79a13689cc2b55515752533f3bc48fdf545aa3639d366a9d3b722274b9426"
	I1018 09:30:58.374571 1460394 cri.go:89] found id: "6bc656f23e97f2a26e96b76c07156bef906ac7390f71c3992b639108a73b3b77"
	I1018 09:30:58.374574 1460394 cri.go:89] found id: "83f2dfcd2a9a025f2474d147b1054078bec3a16567dfd26d3cf9d202de3cda59"
	I1018 09:30:58.374578 1460394 cri.go:89] found id: "eeff6e8a782500b5b3c99df3f19a42ccbe900a5e69acb513548515106b6b820b"
	I1018 09:30:58.374594 1460394 cri.go:89] found id: "4744dbee055e10babc2fda11917557a08f5b523b1ba26af96a30f9d1e4200027"
	I1018 09:30:58.374602 1460394 cri.go:89] found id: "f32a0ff7855256e0fb8bf0a5004fc3ea08393ad5781721d98ea878404aa56ba5"
	I1018 09:30:58.374606 1460394 cri.go:89] found id: "8eda135cfc037fbd8f05a4e7cfe080a910c91820b87edf4d7370b9af44b3bbc5"
	I1018 09:30:58.374613 1460394 cri.go:89] found id: "4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4"
	I1018 09:30:58.374621 1460394 cri.go:89] found id: "e5e655d2337a235dfde07e733ece6522bca028878923664cb1aa29ce8e0720ec"
	I1018 09:30:58.374624 1460394 cri.go:89] found id: ""
	I1018 09:30:58.374673 1460394 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:30:58.411004 1460394 out.go:203] 
	W1018 09:30:58.414054 1460394 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:30:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:30:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:30:58.414074 1460394 out.go:285] * 
	* 
	W1018 09:30:58.429501 1460394 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:30:58.433387 1460394 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-136598 --alsologtostderr -v=1 failed: exit status 80
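The retries in the stderr above show the failure mode: each pause attempt stops the kubelet and then runs sudo runc list -f json inside the node, and every attempt aborts with "open /run/runc: no such file or directory", so no container is ever paused and the command exits with GUEST_PAUSE (status 80). A minimal sketch for probing the runtime state root directly, assuming the profile's container is still running; /run/crun as an alternative state directory for a crun-backed CRI-O is an assumption, not something this log confirms:

	# hypothetical probe of the CRI state directories inside the node
	out/minikube-linux-arm64 -p old-k8s-version-136598 ssh -- sudo ls -ld /run/runc /run/crun
	out/minikube-linux-arm64 -p old-k8s-version-136598 ssh -- sudo crictl ps -a --quiet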
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-136598
helpers_test.go:243: (dbg) docker inspect old-k8s-version-136598:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf",
	        "Created": "2025-10-18T09:28:36.683322169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1456902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:29:53.989711946Z",
	            "FinishedAt": "2025-10-18T09:29:53.188374573Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/hosts",
	        "LogPath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf-json.log",
	        "Name": "/old-k8s-version-136598",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136598:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136598",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf",
	                "LowerDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136598",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136598/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136598",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136598",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136598",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be8e48474839a8f344189531340360bed972b402e117bb71f190aaae67413002",
	            "SandboxKey": "/var/run/docker/netns/be8e48474839",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34876"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34877"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34880"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34878"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34879"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136598": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:01:c3:3e:3e:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ac75cd444c8d84d3c10418ebf74369e7543fa159203a9e520092b626fcf4011",
	                    "EndpointID": "6d3698a955d35c8f6f3f95b0018cb6cccadad918d6d7c23487db948bf663ec09",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-136598",
	                        "396852f7b3ff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
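For reference, a minimal Go sketch (not part of the test suite; the struct fields mirror the JSON keys in the `docker inspect` output above) showing how the published port map in that output can be decoded programmatically:

package main

import (
	"encoding/json"
	"fmt"
)

// portBinding mirrors one entry under NetworkSettings.Ports in the
// inspect output above; "HostIp"/"HostPort" match the JSON keys.
type portBinding struct {
	HostIP   string `json:"HostIp"`
	HostPort string `json:"HostPort"`
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// Trimmed sample of the inspect output above (one port for brevity).
	raw := []byte(`[{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"34876"}]}}}]`)
	var entries []inspectEntry
	if err := json.Unmarshal(raw, &entries); err != nil {
		panic(err)
	}
	for port, bindings := range entries[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort) // 22/tcp -> 127.0.0.1:34876
		}
	}
}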
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-136598 -n old-k8s-version-136598
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-136598 -n old-k8s-version-136598: exit status 2 (581.848241ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
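The `--format={{.Host}}` flag is a Go text/template rendered against the profile's status object, which is why the command prints only the bare `Running` above. A minimal sketch (the struct below is illustrative, not minikube's actual status type):

package main

import (
	"os"
	"text/template"
)

// clusterStatus is a stand-in for minikube's status object; any struct
// with a Host field is enough for {{.Host}} to render.
type clusterStatus struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// Prints just "Running"; the non-zero exit code of `minikube status`
	// reflects the component states, not the template output itself.
	_ = tmpl.Execute(os.Stdout, clusterStatus{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"})
}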
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-136598 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-136598 logs -n 25: (1.737998707s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-275703 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo crio config                                                                                                                                                                                                             │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ delete  │ -p cilium-275703                                                                                                                                                                                                                              │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ start   │ -p force-systemd-env-406177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-406177  │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ delete  │ -p force-systemd-env-406177                                                                                                                                                                                                                   │ force-systemd-env-406177  │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:27 UTC │
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │                     │
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ delete  │ -p kubernetes-upgrade-757858                                                                                                                                                                                                                  │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ start   │ -p cert-options-783705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ cert-options-783705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ -p cert-options-783705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ delete  │ -p cert-options-783705                                                                                                                                                                                                                        │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │                     │
	│ stop    │ -p old-k8s-version-136598 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-136598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p cert-expiration-854768                                                                                                                                                                                                                     │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ image   │ old-k8s-version-136598 image list --format=json                                                                                                                                                                                               │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-136598 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951         │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:30:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:30:56.115725 1460427 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:30:56.115988 1460427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:56.116001 1460427 out.go:374] Setting ErrFile to fd 2...
	I1018 09:30:56.116007 1460427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:56.116306 1460427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:30:56.116822 1460427 out.go:368] Setting JSON to false
	I1018 09:30:56.118493 1460427 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40404,"bootTime":1760739453,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:30:56.118580 1460427 start.go:141] virtualization:  
	I1018 09:30:56.122464 1460427 out.go:179] * [no-preload-886951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:30:56.125554 1460427 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:30:56.125745 1460427 notify.go:220] Checking for updates...
	I1018 09:30:56.131626 1460427 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:30:56.135449 1460427 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:30:56.138356 1460427 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:30:56.141330 1460427 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:30:56.144194 1460427 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:30:56.148395 1460427 config.go:182] Loaded profile config "old-k8s-version-136598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:30:56.148519 1460427 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:30:56.182681 1460427 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:30:56.183494 1460427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:56.271872 1460427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:30:56.259714037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:56.271982 1460427 docker.go:318] overlay module found
	I1018 09:30:56.275227 1460427 out.go:179] * Using the docker driver based on user configuration
	I1018 09:30:56.278208 1460427 start.go:305] selected driver: docker
	I1018 09:30:56.278226 1460427 start.go:925] validating driver "docker" against <nil>
	I1018 09:30:56.278246 1460427 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:30:56.278912 1460427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:56.391219 1460427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:30:56.382075111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:56.391378 1460427 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:30:56.392935 1460427 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:30:56.396106 1460427 out.go:179] * Using Docker driver with root privileges
	I1018 09:30:56.399078 1460427 cni.go:84] Creating CNI manager for ""
	I1018 09:30:56.399154 1460427 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:30:56.399169 1460427 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:30:56.399251 1460427 start.go:349] cluster config:
	{Name:no-preload-886951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:30:56.402340 1460427 out.go:179] * Starting "no-preload-886951" primary control-plane node in "no-preload-886951" cluster
	I1018 09:30:56.405135 1460427 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:30:56.408021 1460427 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:30:56.410864 1460427 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:30:56.411017 1460427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/config.json ...
	I1018 09:30:56.411058 1460427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/config.json: {Name:mk60553321cd9c490bd7767b79255ad2bc4ad3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:30:56.411271 1460427 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:30:56.411517 1460427 cache.go:107] acquiring lock: {Name:mkaa43f9374ace13fbeea7697fbebfe03a59b228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.411583 1460427 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:30:56.411597 1460427 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 89.491µs
	I1018 09:30:56.411611 1460427 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:30:56.411622 1460427 cache.go:107] acquiring lock: {Name:mkbebba4bc705d659ee66bc0af56d117598bf518 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.411704 1460427 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:30:56.412101 1460427 cache.go:107] acquiring lock: {Name:mk55ca2130ad8720b5d4e30a3e3aca89f3adaf85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.412200 1460427 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:30:56.412434 1460427 cache.go:107] acquiring lock: {Name:mk181c56341c6ab3c8b820245c38e1f457dfcfbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.412563 1460427 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:30:56.412783 1460427 cache.go:107] acquiring lock: {Name:mk23edb8e930744ec07884b432879c4ea00b2405 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.412879 1460427 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:30:56.413117 1460427 cache.go:107] acquiring lock: {Name:mk8d3760b83fd8a7218910885f73a4559e163755 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.413207 1460427 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 09:30:56.413449 1460427 cache.go:107] acquiring lock: {Name:mkccda2c66e79badbf58f1b3c791a60ea2d0dd4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.413600 1460427 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:30:56.413825 1460427 cache.go:107] acquiring lock: {Name:mk3f05ac3a6df0aaf5c01de1c3278a44e71a1ede Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.413936 1460427 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:30:56.417962 1460427 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:30:56.418600 1460427 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:30:56.418841 1460427 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:30:56.419039 1460427 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:30:56.419269 1460427 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 09:30:56.419449 1460427 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:30:56.419633 1460427 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:30:56.445399 1460427 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:30:56.445428 1460427 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:30:56.445442 1460427 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:30:56.445463 1460427 start.go:360] acquireMachinesLock for no-preload-886951: {Name:mk1b35ce5d45058835b57539f98f93aa21da27b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.445565 1460427 start.go:364] duration metric: took 82.107µs to acquireMachinesLock for "no-preload-886951"
	I1018 09:30:56.445604 1460427 start.go:93] Provisioning new machine with config: &{Name:no-preload-886951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:30:56.445669 1460427 start.go:125] createHost starting for "" (driver="docker")
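The Memory:3072 and CPUs:2 in this cluster config correspond to the HostConfig values seen in the old-k8s-version-136598 inspect output earlier (Memory: 3221225472, NanoCpus: 2000000000); both profiles start with --memory=3072 and two CPUs. A quick sketch of the unit conversion:

package main

import "fmt"

func main() {
	const memoryMiB = 3072 // --memory=3072 is in MiB
	const cpus = 2         // CPUs:2 in the cluster config
	fmt.Println(int64(memoryMiB) * 1024 * 1024) // 3221225472 bytes, as in HostConfig.Memory
	fmt.Println(int64(cpus) * 1_000_000_000)    // 2000000000, as in HostConfig.NanoCpus
}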
	
	
	==> CRI-O <==
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.279924482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.287142923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.287641124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.313285088Z" level=info msg="Created container 4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc/dashboard-metrics-scraper" id=6788bbad-75e6-4831-a41a-618da1f41b43 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.314039922Z" level=info msg="Starting container: 4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4" id=7cd28d9a-9414-4cf8-9bb1-24bf0edbba2d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.318707706Z" level=info msg="Started container" PID=1654 containerID=4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc/dashboard-metrics-scraper id=7cd28d9a-9414-4cf8-9bb1-24bf0edbba2d name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c9b4ef92d06d71177a5436d9c71631d78ab7ad124b2510a64f289410750e502
	Oct 18 09:30:44 old-k8s-version-136598 conmon[1652]: conmon 4b8a746cb4e9fe3d6643 <ninfo>: container 1654 exited with status 1
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.521342387Z" level=info msg="Removing container: 49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08" id=63882834-b0dd-4701-b4b4-d01c1b09e8f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.534584357Z" level=info msg="Error loading conmon cgroup of container 49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08: cgroup deleted" id=63882834-b0dd-4701-b4b4-d01c1b09e8f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.540734586Z" level=info msg="Removed container 49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc/dashboard-metrics-scraper" id=63882834-b0dd-4701-b4b4-d01c1b09e8f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.01527679Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.020552179Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.020755579Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.020848664Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.025445032Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.02547996Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.025501113Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.033787224Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.033823269Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.033845766Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.053515471Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.053561386Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.053586591Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.058763941Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.05880878Z" level=info msg="Updated default CNI network name to kindnet"
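The CNI monitoring events above (CREATE/WRITE/RENAME on files under /etc/cni/net.d) are inotify-style filesystem notifications that prompt CRI-O to reload its default CNI network. A minimal sketch of the same watch pattern using github.com/fsnotify/fsnotify (an assumption about the mechanism; the log only shows the resulting events):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	// Watch the CNI configuration directory, as CRI-O's log lines suggest.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		// Op stringifies to CREATE, WRITE, RENAME, etc., matching the
		// event names printed in the CRI-O log above.
		log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
	}
}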
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	4b8a746cb4e9f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   0c9b4ef92d06d       dashboard-metrics-scraper-5f989dc9cf-fx8gc       kubernetes-dashboard
	8386a4b778835       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   f3d61f916bf34       storage-provisioner                              kube-system
	e5e655d2337a2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   fd6e4146beb1f       kubernetes-dashboard-8694d4445c-c2x4c            kubernetes-dashboard
	5ddf89e58fd02       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           52 seconds ago      Running             coredns                     1                   8d5dd981e449a       coredns-5dd5756b68-6ldkv                         kube-system
	ab92a1b3547c4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   05f636ca35362       busybox                                          default
	4ed79a13689cc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   f3d61f916bf34       storage-provisioner                              kube-system
	6bc656f23e97f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   ce4fd0dfeb0bd       kindnet-zff87                                    kube-system
	83f2dfcd2a9a0       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           52 seconds ago      Running             kube-proxy                  1                   734d45296b945       kube-proxy-9pwdq                                 kube-system
	eeff6e8a78250       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   35750f0815675       kube-apiserver-old-k8s-version-136598            kube-system
	4744dbee055e1       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   16e0cc36726ab       kube-controller-manager-old-k8s-version-136598   kube-system
	f32a0ff785525       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   e8f50dd31e190       etcd-old-k8s-version-136598                      kube-system
	8eda135cfc037       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   bfda95d10dd61       kube-scheduler-old-k8s-version-136598            kube-system
	
	
	==> coredns [5ddf89e58fd0231dcb686b1ec34b78eba7c7382651d2a6669db94b08a351ad8f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50177 - 22621 "HINFO IN 4422825209495748244.4771825006227619632. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032205766s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-136598
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-136598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=old-k8s-version-136598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_29_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:28:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-136598
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:30:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:30:37 +0000   Sat, 18 Oct 2025 09:28:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:30:37 +0000   Sat, 18 Oct 2025 09:28:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:30:37 +0000   Sat, 18 Oct 2025 09:28:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:30:37 +0000   Sat, 18 Oct 2025 09:29:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-136598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2ec042d2-4e94-4d4b-a1d0-dda9032068a7
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-6ldkv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     107s
	  kube-system                 etcd-old-k8s-version-136598                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-zff87                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-136598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-136598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-9pwdq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-136598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fx8gc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-c2x4c             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-136598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node old-k8s-version-136598 event: Registered Node old-k8s-version-136598 in Controller
	  Normal  NodeReady                94s                kubelet          Node old-k8s-version-136598 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-136598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-136598 event: Registered Node old-k8s-version-136598 in Controller
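	The percentages in the "Allocated resources" table above follow from the node's allocatable figures (cpu: 2, memory: 8022304Ki). A quick check, assuming the integer truncation kubectl-style output appears to use:
	
	package main
	
	import "fmt"
	
	func main() {
		// Requests from the table above vs. the node's allocatable capacity.
		cpuRequestMilli, cpuAllocMilli := 850, 2000   // 850m of 2 CPUs
		memRequestKi, memAllocKi := 220*1024, 8022304 // 220Mi of 8022304Ki
		fmt.Printf("cpu: %d%%\n", cpuRequestMilli*100/cpuAllocMilli) // cpu: 42%
		fmt.Printf("memory: %d%%\n", memRequestKi*100/memAllocKi)    // memory: 2%
	}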
	
	
	==> dmesg <==
	[Oct18 09:07] overlayfs: idmapped layers are currently not supported
	[ +35.005632] overlayfs: idmapped layers are currently not supported
	[Oct18 09:08] overlayfs: idmapped layers are currently not supported
	[Oct18 09:10] overlayfs: idmapped layers are currently not supported
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f32a0ff7855256e0fb8bf0a5004fc3ea08393ad5781721d98ea878404aa56ba5] <==
	{"level":"info","ts":"2025-10-18T09:30:02.120223Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:30:02.123502Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-18T09:30:02.128663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T09:30:02.129368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T09:30:02.129701Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:30:02.129769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:30:02.141455Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:30:02.150805Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:30:02.151Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:30:02.151881Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:30:02.151832Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:30:03.092051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T09:30:03.092105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:30:03.092123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T09:30:03.092136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T09:30:03.092142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T09:30:03.092152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T09:30:03.09216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T09:30:03.098645Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-136598 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:30:03.098694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:30:03.100425Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:30:03.109585Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T09:30:03.109958Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T09:30:03.147912Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:30:03.148008Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:31:00 up 11:13,  0 user,  load average: 2.20, 2.78, 2.42
	Linux old-k8s-version-136598 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6bc656f23e97f2a26e96b76c07156bef906ac7390f71c3992b639108a73b3b77] <==
	I1018 09:30:07.816589       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:30:07.816800       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:30:07.816931       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:30:07.816942       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:30:07.816952       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:30:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:30:08.013166       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:30:08.013184       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:30:08.013194       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:30:08.014387       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:30:38.013788       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:30:38.013802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:30:38.015097       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 09:30:38.015110       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1018 09:30:39.614028       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:30:39.614155       1 metrics.go:72] Registering metrics
	I1018 09:30:39.614245       1 controller.go:711] "Syncing nftables rules"
	I1018 09:30:48.014806       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:30:48.014988       1 main.go:301] handling current node
	I1018 09:30:58.022495       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:30:58.022528       1 main.go:301] handling current node
	
	
	==> kube-apiserver [eeff6e8a782500b5b3c99df3f19a42ccbe900a5e69acb513548515106b6b820b] <==
	I1018 09:30:06.544655       1 aggregator.go:166] initial CRD sync complete...
	I1018 09:30:06.544671       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 09:30:06.544678       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:30:06.544684       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:30:06.578444       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 09:30:06.579338       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 09:30:06.579402       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1018 09:30:06.579925       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:30:06.590810       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:30:07.332215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:30:08.282336       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 09:30:08.327487       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 09:30:08.364671       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:30:08.374143       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:30:08.396305       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 09:30:08.480588       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.29.166"}
	I1018 09:30:08.505927       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.98.125"}
	E1018 09:30:16.579644       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I1018 09:30:19.163108       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 09:30:19.195936       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 09:30:19.205520       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1018 09:30:26.581417       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:30:36.581838       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:30:46.582920       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:30:56.586128       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [4744dbee055e10babc2fda11917557a08f5b523b1ba26af96a30f9d1e4200027] <==
	I1018 09:30:19.280156       1 taint_manager.go:211] "Sending events to api server"
	I1018 09:30:19.280405       1 event.go:307] "Event occurred" object="old-k8s-version-136598" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-136598 event: Registered Node old-k8s-version-136598 in Controller"
	I1018 09:30:19.282961       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1018 09:30:19.287410       1 shared_informer.go:318] Caches are synced for crt configmap
	I1018 09:30:19.289617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="87.874429ms"
	I1018 09:30:19.302152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.48497ms"
	I1018 09:30:19.302241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="40.319µs"
	I1018 09:30:19.321513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.874822ms"
	I1018 09:30:19.333202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.94µs"
	I1018 09:30:19.335449       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:30:19.349632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.06788ms"
	I1018 09:30:19.349828       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.068µs"
	I1018 09:30:19.391148       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:30:19.721308       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:30:19.721346       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 09:30:19.749692       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:30:24.457394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.753µs"
	I1018 09:30:25.463427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="147.795µs"
	I1018 09:30:26.500535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.235µs"
	I1018 09:30:30.505217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.915902ms"
	I1018 09:30:30.505487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.827µs"
	I1018 09:30:41.235733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.826668ms"
	I1018 09:30:41.235915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.007µs"
	I1018 09:30:44.549556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.771µs"
	I1018 09:30:49.596382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.868µs"
	
	
	==> kube-proxy [83f2dfcd2a9a025f2474d147b1054078bec3a16567dfd26d3cf9d202de3cda59] <==
	I1018 09:30:07.882967       1 server_others.go:69] "Using iptables proxy"
	I1018 09:30:07.912250       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 09:30:08.048628       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:30:08.050690       1 server_others.go:152] "Using iptables Proxier"
	I1018 09:30:08.050817       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 09:30:08.050851       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 09:30:08.050906       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 09:30:08.051160       1 server.go:846] "Version info" version="v1.28.0"
	I1018 09:30:08.051380       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:30:08.052199       1 config.go:188] "Starting service config controller"
	I1018 09:30:08.052278       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 09:30:08.052336       1 config.go:97] "Starting endpoint slice config controller"
	I1018 09:30:08.052371       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 09:30:08.052864       1 config.go:315] "Starting node config controller"
	I1018 09:30:08.053956       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 09:30:08.152668       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 09:30:08.152711       1 shared_informer.go:318] Caches are synced for service config
	I1018 09:30:08.154132       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8eda135cfc037fbd8f05a4e7cfe080a910c91820b87edf4d7370b9af44b3bbc5] <==
	I1018 09:30:04.720310       1 serving.go:348] Generated self-signed cert in-memory
	W1018 09:30:06.332908       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:30:06.332940       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:30:06.332949       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:30:06.332959       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:30:06.502712       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 09:30:06.502750       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:30:06.504855       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 09:30:06.511277       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:30:06.511463       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 09:30:06.511493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 09:30:06.611922       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: I1018 09:30:19.372118     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e97c2dca-e2c5-4d41-8dcc-b60fda13fea8-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-c2x4c\" (UID: \"e97c2dca-e2c5-4d41-8dcc-b60fda13fea8\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c2x4c"
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: I1018 09:30:19.372347     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6d26dd0-d0cb-4d7c-9c28-5979bac7befa-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fx8gc\" (UID: \"f6d26dd0-d0cb-4d7c-9c28-5979bac7befa\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc"
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: I1018 09:30:19.372446     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnpb\" (UniqueName: \"kubernetes.io/projected/e97c2dca-e2c5-4d41-8dcc-b60fda13fea8-kube-api-access-lxnpb\") pod \"kubernetes-dashboard-8694d4445c-c2x4c\" (UID: \"e97c2dca-e2c5-4d41-8dcc-b60fda13fea8\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c2x4c"
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: I1018 09:30:19.372549     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5vb6\" (UniqueName: \"kubernetes.io/projected/f6d26dd0-d0cb-4d7c-9c28-5979bac7befa-kube-api-access-c5vb6\") pod \"dashboard-metrics-scraper-5f989dc9cf-fx8gc\" (UID: \"f6d26dd0-d0cb-4d7c-9c28-5979bac7befa\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc"
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: W1018 09:30:19.602411     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/crio-0c9b4ef92d06d71177a5436d9c71631d78ab7ad124b2510a64f289410750e502 WatchSource:0}: Error finding container 0c9b4ef92d06d71177a5436d9c71631d78ab7ad124b2510a64f289410750e502: Status 404 returned error can't find the container with id 0c9b4ef92d06d71177a5436d9c71631d78ab7ad124b2510a64f289410750e502
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: W1018 09:30:19.621645     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/crio-fd6e4146beb1f362dbcc50e121fe618f14c871f369856970fba090ecd01ca0f9 WatchSource:0}: Error finding container fd6e4146beb1f362dbcc50e121fe618f14c871f369856970fba090ecd01ca0f9: Status 404 returned error can't find the container with id fd6e4146beb1f362dbcc50e121fe618f14c871f369856970fba090ecd01ca0f9
	Oct 18 09:30:24 old-k8s-version-136598 kubelet[779]: I1018 09:30:24.440790     779 scope.go:117] "RemoveContainer" containerID="920279401f75960f18abcbbe4a10d256b29167c101d1e79850cd8e8735b37ab2"
	Oct 18 09:30:25 old-k8s-version-136598 kubelet[779]: I1018 09:30:25.446081     779 scope.go:117] "RemoveContainer" containerID="920279401f75960f18abcbbe4a10d256b29167c101d1e79850cd8e8735b37ab2"
	Oct 18 09:30:25 old-k8s-version-136598 kubelet[779]: I1018 09:30:25.446904     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:25 old-k8s-version-136598 kubelet[779]: E1018 09:30:25.447175     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:26 old-k8s-version-136598 kubelet[779]: I1018 09:30:26.458803     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:26 old-k8s-version-136598 kubelet[779]: E1018 09:30:26.459080     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:29 old-k8s-version-136598 kubelet[779]: I1018 09:30:29.573978     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:29 old-k8s-version-136598 kubelet[779]: E1018 09:30:29.574307     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:38 old-k8s-version-136598 kubelet[779]: I1018 09:30:38.497880     779 scope.go:117] "RemoveContainer" containerID="4ed79a13689cc2b55515752533f3bc48fdf545aa3639d366a9d3b722274b9426"
	Oct 18 09:30:38 old-k8s-version-136598 kubelet[779]: I1018 09:30:38.525119     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c2x4c" podStartSLOduration=9.458979574 podCreationTimestamp="2025-10-18 09:30:19 +0000 UTC" firstStartedPulling="2025-10-18 09:30:19.625250467 +0000 UTC m=+18.618161454" lastFinishedPulling="2025-10-18 09:30:29.691323133 +0000 UTC m=+28.684234128" observedRunningTime="2025-10-18 09:30:30.492864062 +0000 UTC m=+29.485775049" watchObservedRunningTime="2025-10-18 09:30:38.525052248 +0000 UTC m=+37.517963234"
	Oct 18 09:30:44 old-k8s-version-136598 kubelet[779]: I1018 09:30:44.276632     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:44 old-k8s-version-136598 kubelet[779]: I1018 09:30:44.515047     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:44 old-k8s-version-136598 kubelet[779]: I1018 09:30:44.515492     779 scope.go:117] "RemoveContainer" containerID="4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4"
	Oct 18 09:30:44 old-k8s-version-136598 kubelet[779]: E1018 09:30:44.515932     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:49 old-k8s-version-136598 kubelet[779]: I1018 09:30:49.573033     779 scope.go:117] "RemoveContainer" containerID="4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4"
	Oct 18 09:30:49 old-k8s-version-136598 kubelet[779]: E1018 09:30:49.574042     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:56 old-k8s-version-136598 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:30:56 old-k8s-version-136598 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:30:56 old-k8s-version-136598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e5e655d2337a235dfde07e733ece6522bca028878923664cb1aa29ce8e0720ec] <==
	2025/10/18 09:30:29 Starting overwatch
	2025/10/18 09:30:29 Using namespace: kubernetes-dashboard
	2025/10/18 09:30:29 Using in-cluster config to connect to apiserver
	2025/10/18 09:30:29 Using secret token for csrf signing
	2025/10/18 09:30:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:30:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:30:29 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 09:30:29 Generating JWE encryption key
	2025/10/18 09:30:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:30:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:30:30 Initializing JWE encryption key from synchronized object
	2025/10/18 09:30:30 Creating in-cluster Sidecar client
	2025/10/18 09:30:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:30:30 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [4ed79a13689cc2b55515752533f3bc48fdf545aa3639d366a9d3b722274b9426] <==
	I1018 09:30:07.794951       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:30:37.800106       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8386a4b7788357740c93bd64a1e18437f5db13598342e846f345da3ae2796669] <==
	I1018 09:30:38.547568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:30:38.560423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:30:38.560536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 09:30:55.965503       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:30:55.965969       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc909686-371c-405c-ab2b-9cef06488b3d", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-136598_4382fcbc-b3f9-4387-bfbc-3c0b56091c86 became leader
	I1018 09:30:55.966046       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-136598_4382fcbc-b3f9-4387-bfbc-3c0b56091c86!
	I1018 09:30:56.066221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-136598_4382fcbc-b3f9-4387-bfbc-3c0b56091c86!
	

                                                
                                                
-- /stdout --
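The kubelet journal in the logs above ends with systemd stopping kubelet.service, which is consistent with the pause having begun before the failure. To see whether the CRI containers themselves were left paused, one approach (a sketch, reusing the harness binary and profile name from this run) is to list all containers and their states from inside the node:

	# List every container cri-o knows about, with its current state (sketch).
	out/minikube-linux-arm64 ssh -p old-k8s-version-136598 -- sudo crictl ps -a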
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-136598 -n old-k8s-version-136598
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-136598 -n old-k8s-version-136598: exit status 2 (361.134949ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
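The harness treats exit status 2 as possibly OK because `status` reports per-component state: the host container is Running while the kubelet had been stopped for the pause. A sketch for printing the component fields side by side, using the same Go-template mechanism the harness already uses above:

	# Print host/kubelet/apiserver/kubeconfig states on one line (sketch).
	out/minikube-linux-arm64 status -p old-k8s-version-136598 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'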
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-136598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
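Selecting on status.phase!=Running misses pods stuck in CrashLoopBackOff, since those keep phase Running; the kubelet log above shows dashboard-metrics-scraper in exactly that state. A complementary query (a sketch; the jsonpath is illustrative) surfaces per-container waiting reasons instead:

	# List namespace, pod, and any container waiting reason such as CrashLoopBackOff (sketch).
	kubectl --context old-k8s-version-136598 get pods -A \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'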
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-136598
helpers_test.go:243: (dbg) docker inspect old-k8s-version-136598:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf",
	        "Created": "2025-10-18T09:28:36.683322169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1456902,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:29:53.989711946Z",
	            "FinishedAt": "2025-10-18T09:29:53.188374573Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/hostname",
	        "HostsPath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/hosts",
	        "LogPath": "/var/lib/docker/containers/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf-json.log",
	        "Name": "/old-k8s-version-136598",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-136598:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-136598",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf",
	                "LowerDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716/merged",
	                "UpperDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716/diff",
	                "WorkDir": "/var/lib/docker/overlay2/84864c1490f58ee53ad331d94af688b88ad3a7e940f19ffed23dd53ccaba1716/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-136598",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-136598/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-136598",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-136598",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-136598",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "be8e48474839a8f344189531340360bed972b402e117bb71f190aaae67413002",
	            "SandboxKey": "/var/run/docker/netns/be8e48474839",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34876"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34877"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34880"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34878"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34879"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-136598": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:01:c3:3e:3e:60",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ac75cd444c8d84d3c10418ebf74369e7543fa159203a9e520092b626fcf4011",
	                    "EndpointID": "6d3698a955d35c8f6f3f95b0018cb6cccadad918d6d7c23487db948bf663ec09",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-136598",
	                        "396852f7b3ff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
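The full docker inspect dump is kept for completeness; for the pause post-mortem the interesting bits are whether the node container itself is paused and which host port fronts the API server (8443/tcp, mapped to 127.0.0.1:34879 above). Both can be pulled directly with standard docker CLI flags (a sketch):

	# Container runtime state: Status should be "running", Paused true/false (sketch).
	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' old-k8s-version-136598
	# Host-side mapping of the API server port.
	docker port old-k8s-version-136598 8443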
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-136598 -n old-k8s-version-136598
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-136598 -n old-k8s-version-136598: exit status 2 (346.30415ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-136598 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-136598 logs -n 25: (1.571438058s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-275703 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ ssh     │ -p cilium-275703 sudo crio config                                                                                                                                                                                                             │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │                     │
	│ delete  │ -p cilium-275703                                                                                                                                                                                                                              │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ start   │ -p force-systemd-env-406177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-406177  │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ delete  │ -p force-systemd-env-406177                                                                                                                                                                                                                   │ force-systemd-env-406177  │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:27 UTC │
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │                     │
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ delete  │ -p kubernetes-upgrade-757858                                                                                                                                                                                                                  │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ start   │ -p cert-options-783705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ cert-options-783705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ -p cert-options-783705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ delete  │ -p cert-options-783705                                                                                                                                                                                                                        │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │                     │
	│ stop    │ -p old-k8s-version-136598 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-136598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p cert-expiration-854768                                                                                                                                                                                                                     │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ image   │ old-k8s-version-136598 image list --format=json                                                                                                                                                                                               │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-136598 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951         │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:30:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:30:56.115725 1460427 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:30:56.115988 1460427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:56.116001 1460427 out.go:374] Setting ErrFile to fd 2...
	I1018 09:30:56.116007 1460427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:30:56.116306 1460427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:30:56.116822 1460427 out.go:368] Setting JSON to false
	I1018 09:30:56.118493 1460427 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40404,"bootTime":1760739453,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:30:56.118580 1460427 start.go:141] virtualization:  
	I1018 09:30:56.122464 1460427 out.go:179] * [no-preload-886951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:30:56.125554 1460427 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:30:56.125745 1460427 notify.go:220] Checking for updates...
	I1018 09:30:56.131626 1460427 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:30:56.135449 1460427 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:30:56.138356 1460427 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:30:56.141330 1460427 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:30:56.144194 1460427 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:30:56.148395 1460427 config.go:182] Loaded profile config "old-k8s-version-136598": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:30:56.148519 1460427 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:30:56.182681 1460427 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:30:56.183494 1460427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:56.271872 1460427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:30:56.259714037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:56.271982 1460427 docker.go:318] overlay module found
	I1018 09:30:56.275227 1460427 out.go:179] * Using the docker driver based on user configuration
	I1018 09:30:56.278208 1460427 start.go:305] selected driver: docker
	I1018 09:30:56.278226 1460427 start.go:925] validating driver "docker" against <nil>
	I1018 09:30:56.278246 1460427 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:30:56.278912 1460427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:30:56.391219 1460427 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:30:56.382075111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:30:56.391378 1460427 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:30:56.392935 1460427 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:30:56.396106 1460427 out.go:179] * Using Docker driver with root privileges
	I1018 09:30:56.399078 1460427 cni.go:84] Creating CNI manager for ""
	I1018 09:30:56.399154 1460427 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:30:56.399169 1460427 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:30:56.399251 1460427 start.go:349] cluster config:
	{Name:no-preload-886951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:30:56.402340 1460427 out.go:179] * Starting "no-preload-886951" primary control-plane node in "no-preload-886951" cluster
	I1018 09:30:56.405135 1460427 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:30:56.408021 1460427 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:30:56.410864 1460427 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:30:56.411017 1460427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/config.json ...
	I1018 09:30:56.411058 1460427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/config.json: {Name:mk60553321cd9c490bd7767b79255ad2bc4ad3b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:30:56.411271 1460427 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:30:56.411517 1460427 cache.go:107] acquiring lock: {Name:mkaa43f9374ace13fbeea7697fbebfe03a59b228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.411583 1460427 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:30:56.411597 1460427 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 89.491µs
	I1018 09:30:56.411611 1460427 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:30:56.411622 1460427 cache.go:107] acquiring lock: {Name:mkbebba4bc705d659ee66bc0af56d117598bf518 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.411704 1460427 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:30:56.412101 1460427 cache.go:107] acquiring lock: {Name:mk55ca2130ad8720b5d4e30a3e3aca89f3adaf85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.412200 1460427 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:30:56.412434 1460427 cache.go:107] acquiring lock: {Name:mk181c56341c6ab3c8b820245c38e1f457dfcfbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.412563 1460427 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:30:56.412783 1460427 cache.go:107] acquiring lock: {Name:mk23edb8e930744ec07884b432879c4ea00b2405 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.412879 1460427 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:30:56.413117 1460427 cache.go:107] acquiring lock: {Name:mk8d3760b83fd8a7218910885f73a4559e163755 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.413207 1460427 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 09:30:56.413449 1460427 cache.go:107] acquiring lock: {Name:mkccda2c66e79badbf58f1b3c791a60ea2d0dd4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.413600 1460427 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:30:56.413825 1460427 cache.go:107] acquiring lock: {Name:mk3f05ac3a6df0aaf5c01de1c3278a44e71a1ede Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.413936 1460427 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:30:56.417962 1460427 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:30:56.418600 1460427 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:30:56.418841 1460427 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:30:56.419039 1460427 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:30:56.419269 1460427 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 09:30:56.419449 1460427 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:30:56.419633 1460427 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:30:56.445399 1460427 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:30:56.445428 1460427 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:30:56.445442 1460427 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:30:56.445463 1460427 start.go:360] acquireMachinesLock for no-preload-886951: {Name:mk1b35ce5d45058835b57539f98f93aa21da27b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:30:56.445565 1460427 start.go:364] duration metric: took 82.107µs to acquireMachinesLock for "no-preload-886951"
	I1018 09:30:56.445604 1460427 start.go:93] Provisioning new machine with config: &{Name:no-preload-886951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:30:56.445669 1460427 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:30:56.453205 1460427 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:30:56.453443 1460427 start.go:159] libmachine.API.Create for "no-preload-886951" (driver="docker")
	I1018 09:30:56.453479 1460427 client.go:168] LocalClient.Create starting
	I1018 09:30:56.453557 1460427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem
	I1018 09:30:56.453594 1460427 main.go:141] libmachine: Decoding PEM data...
	I1018 09:30:56.453611 1460427 main.go:141] libmachine: Parsing certificate...
	I1018 09:30:56.453667 1460427 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem
	I1018 09:30:56.453698 1460427 main.go:141] libmachine: Decoding PEM data...
	I1018 09:30:56.453713 1460427 main.go:141] libmachine: Parsing certificate...
	I1018 09:30:56.454091 1460427 cli_runner.go:164] Run: docker network inspect no-preload-886951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:30:56.489793 1460427 cli_runner.go:211] docker network inspect no-preload-886951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:30:56.489874 1460427 network_create.go:284] running [docker network inspect no-preload-886951] to gather additional debugging logs...
	I1018 09:30:56.489891 1460427 cli_runner.go:164] Run: docker network inspect no-preload-886951
	W1018 09:30:56.507211 1460427 cli_runner.go:211] docker network inspect no-preload-886951 returned with exit code 1
	I1018 09:30:56.507241 1460427 network_create.go:287] error running [docker network inspect no-preload-886951]: docker network inspect no-preload-886951: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-886951 not found
	I1018 09:30:56.507254 1460427 network_create.go:289] output of [docker network inspect no-preload-886951]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-886951 not found
	
	** /stderr **
	I1018 09:30:56.507344 1460427 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:30:56.529695 1460427 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-521f8f572997 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:7e:e5:c0:67:29} reservation:<nil>}
	I1018 09:30:56.530077 1460427 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b81e76c4e4f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:bf:e8:f1:22:c8} reservation:<nil>}
	I1018 09:30:56.530388 1460427 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-41e3e621447e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:fc:17:ff:cd:8c} reservation:<nil>}
	I1018 09:30:56.530646 1460427 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0ac75cd444c8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:c6:40:5a:c4:ba:f2} reservation:<nil>}
	I1018 09:30:56.531054 1460427 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bdfac0}
	I1018 09:30:56.531072 1460427 network_create.go:124] attempt to create docker network no-preload-886951 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 09:30:56.531137 1460427 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-886951 no-preload-886951
	I1018 09:30:56.611722 1460427 network_create.go:108] docker network no-preload-886951 192.168.85.0/24 created
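
Annotation: the four "skipping subnet ... that is taken" lines above trace minikube's free-subnet search. In this log the candidate 192.168.x.0/24 networks advance in steps of 9 (49, 58, 67, 76, ...), each is rejected if a docker bridge already owns it, and the first free one, 192.168.85.0/24, becomes the new network. A minimal sketch of that walk, assuming the taken set was already collected from docker network inspect; the helper name and step constant are inferred from this log, not minikube's actual network.go API:

    package main

    import "fmt"

    // firstFreeSubnet mirrors, in miniature, the walk in the log above:
    // step the third octet by 9 from 192.168.49.0/24 and take the first
    // CIDR not already claimed by an existing docker bridge.
    func firstFreeSubnet(taken map[string]bool) (string, bool) {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr, true
            }
        }
        return "", false
    }

    func main() {
        taken := map[string]bool{ // subnets the log reports as taken
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24 true
    }
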
	I1018 09:30:56.611754 1460427 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-886951" container
	I1018 09:30:56.611927 1460427 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:30:56.636009 1460427 cli_runner.go:164] Run: docker volume create no-preload-886951 --label name.minikube.sigs.k8s.io=no-preload-886951 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:30:56.663378 1460427 oci.go:103] Successfully created a docker volume no-preload-886951
	I1018 09:30:56.663470 1460427 cli_runner.go:164] Run: docker run --rm --name no-preload-886951-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-886951 --entrypoint /usr/bin/test -v no-preload-886951:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:30:56.740824 1460427 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 09:30:56.757720 1460427 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 09:30:56.759020 1460427 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 09:30:56.768820 1460427 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 09:30:56.769735 1460427 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 09:30:56.772108 1460427 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 09:30:56.777668 1460427 cache.go:162] opening:  /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 09:30:56.822934 1460427 cache.go:157] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:30:56.822963 1460427 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 409.848375ms
	I1018 09:30:56.822977 1460427 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:30:57.234931 1460427 cache.go:157] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:30:57.234956 1460427 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 822.176214ms
	I1018 09:30:57.234969 1460427 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:30:57.397285 1460427 oci.go:107] Successfully prepared a docker volume no-preload-886951
	I1018 09:30:57.397312 1460427 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1018 09:30:57.397446 1460427 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:30:57.397557 1460427 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:30:57.473476 1460427 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-886951 --name no-preload-886951 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-886951 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-886951 --network no-preload-886951 --ip 192.168.85.2 --volume no-preload-886951:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:30:57.687445 1460427 cache.go:157] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:30:57.687516 1460427 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.27508479s
	I1018 09:30:57.687542 1460427 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:30:57.712587 1460427 cache.go:157] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:30:57.712639 1460427 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.300543496s
	I1018 09:30:57.712653 1460427 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:30:57.859964 1460427 cache.go:157] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:30:57.859989 1460427 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.446168234s
	I1018 09:30:57.860089 1460427 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:30:57.864507 1460427 cache.go:157] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:30:57.864530 1460427 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.452907132s
	I1018 09:30:57.864542 1460427 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:30:58.010182 1460427 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Running}}
	I1018 09:30:58.058856 1460427 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:30:58.112269 1460427 cli_runner.go:164] Run: docker exec no-preload-886951 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:30:58.176118 1460427 oci.go:144] the created container "no-preload-886951" has a running status.
	I1018 09:30:58.176145 1460427 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa...
	I1018 09:30:58.313511 1460427 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:30:58.381283 1460427 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:30:58.428252 1460427 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:30:58.428276 1460427 kic_runner.go:114] Args: [docker exec --privileged no-preload-886951 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:30:58.552639 1460427 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:30:58.598043 1460427 machine.go:93] provisionDockerMachine start ...
	I1018 09:30:58.598142 1460427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:30:58.658474 1460427 main.go:141] libmachine: Using SSH client type: native
	I1018 09:30:58.659146 1460427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34881 <nil> <nil>}
	I1018 09:30:58.659174 1460427 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:30:58.660525 1460427 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33568->127.0.0.1:34881: read: connection reset by peer
	I1018 09:30:58.927606 1460427 cache.go:157] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:30:58.927628 1460427 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.514182825s
	I1018 09:30:58.927639 1460427 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:30:58.927651 1460427 cache.go:87] Successfully saved all images to host disk.
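
Annotation: every cache.go stanza above follows the same check-or-fetch pattern: acquire the image's file lock, return immediately if the tar already exists under .minikube/cache/images/arm64 (storage-provisioner_v5 resolves in 89µs that way), otherwise download and save the tar (pause through etcd take 409ms to 2.5s). A hedged sketch of that step; fetchAndSave is a hypothetical downloader, not minikube's real signature:

    package cache

    import (
        "errors"
        "os"
        "sync"
    )

    // cacheImage saves "image" as a tar at dst unless it is already on
    // disk; the sync.Mutex stands in for minikube's per-image file lock
    // (the mk... lock names in the log above).
    func cacheImage(mu *sync.Mutex, image, dst string, fetchAndSave func(image, dst string) error) error {
        mu.Lock()
        defer mu.Unlock()
        if _, err := os.Stat(dst); err == nil {
            return nil // cache hit: tar already exists
        } else if !errors.Is(err, os.ErrNotExist) {
            return err
        }
        return fetchAndSave(image, dst) // cache miss: pull, then write the tar
    }
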
	
	
	==> CRI-O <==
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.279924482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.287142923Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.287641124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.313285088Z" level=info msg="Created container 4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc/dashboard-metrics-scraper" id=6788bbad-75e6-4831-a41a-618da1f41b43 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.314039922Z" level=info msg="Starting container: 4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4" id=7cd28d9a-9414-4cf8-9bb1-24bf0edbba2d name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.318707706Z" level=info msg="Started container" PID=1654 containerID=4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc/dashboard-metrics-scraper id=7cd28d9a-9414-4cf8-9bb1-24bf0edbba2d name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c9b4ef92d06d71177a5436d9c71631d78ab7ad124b2510a64f289410750e502
	Oct 18 09:30:44 old-k8s-version-136598 conmon[1652]: conmon 4b8a746cb4e9fe3d6643 <ninfo>: container 1654 exited with status 1
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.521342387Z" level=info msg="Removing container: 49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08" id=63882834-b0dd-4701-b4b4-d01c1b09e8f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.534584357Z" level=info msg="Error loading conmon cgroup of container 49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08: cgroup deleted" id=63882834-b0dd-4701-b4b4-d01c1b09e8f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:30:44 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:44.540734586Z" level=info msg="Removed container 49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc/dashboard-metrics-scraper" id=63882834-b0dd-4701-b4b4-d01c1b09e8f8 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.01527679Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.020552179Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.020755579Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.020848664Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.025445032Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.02547996Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.025501113Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.033787224Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.033823269Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.033845766Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.053515471Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.053561386Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.053586591Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.058763941Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:30:48 old-k8s-version-136598 crio[650]: time="2025-10-18T09:30:48.05880878Z" level=info msg="Updated default CNI network name to kindnet"
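
Annotation: the CREATE → WRITE → RENAME event sequence above is the usual atomic-update dance: the CNI writer (kindnet here) assembles 10-kindnet.conflist.temp in full, then renames it over 10-kindnet.conflist, so CRI-O's config watcher only ever observes a complete file. A generic sketch of the pattern, assuming the temp file lives in the same directory as the target (rename is atomic only within one filesystem):

    package cni

    import "os"

    // writeFileAtomic writes data to a sibling .temp file, flushes it,
    // and renames it into place, so readers see either the old or the
    // new config, never a partial write.
    func writeFileAtomic(path string, data []byte) error {
        tmp := path + ".temp"
        f, err := os.OpenFile(tmp, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
        if err != nil {
            return err
        }
        if _, err := f.Write(data); err != nil {
            f.Close()
            os.Remove(tmp)
            return err
        }
        if err := f.Sync(); err != nil { // make the rename durable
            f.Close()
            os.Remove(tmp)
            return err
        }
        if err := f.Close(); err != nil {
            os.Remove(tmp)
            return err
        }
        return os.Rename(tmp, path)
    }
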
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	4b8a746cb4e9f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   2                   0c9b4ef92d06d       dashboard-metrics-scraper-5f989dc9cf-fx8gc       kubernetes-dashboard
	8386a4b778835       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   f3d61f916bf34       storage-provisioner                              kube-system
	e5e655d2337a2       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   fd6e4146beb1f       kubernetes-dashboard-8694d4445c-c2x4c            kubernetes-dashboard
	5ddf89e58fd02       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   8d5dd981e449a       coredns-5dd5756b68-6ldkv                         kube-system
	ab92a1b3547c4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   05f636ca35362       busybox                                          default
	4ed79a13689cc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   f3d61f916bf34       storage-provisioner                              kube-system
	6bc656f23e97f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   ce4fd0dfeb0bd       kindnet-zff87                                    kube-system
	83f2dfcd2a9a0       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   734d45296b945       kube-proxy-9pwdq                                 kube-system
	eeff6e8a78250       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   35750f0815675       kube-apiserver-old-k8s-version-136598            kube-system
	4744dbee055e1       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   16e0cc36726ab       kube-controller-manager-old-k8s-version-136598   kube-system
	f32a0ff785525       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   e8f50dd31e190       etcd-old-k8s-version-136598                      kube-system
	8eda135cfc037       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   bfda95d10dd61       kube-scheduler-old-k8s-version-136598            kube-system
	
	
	==> coredns [5ddf89e58fd0231dcb686b1ec34b78eba7c7382651d2a6669db94b08a351ad8f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50177 - 22621 "HINFO IN 4422825209495748244.4771825006227619632. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032205766s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
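
Annotation: the final warning here is the same failure kindnet reports further down: for roughly the 30s between kindnet's startup and its cache sync, the kubernetes Service VIP 10.96.0.1:443 was unreachable from pods, after which both components recovered. A hedged sketch of the equivalent reachability probe one could run from inside a pod; the address comes from the log and the 5s timeout is an illustrative choice, not minikube tooling:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // 10.96.0.1:443 is the in-cluster kubernetes Service VIP that
        // both coredns (above) and kindnet (below) time out against.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
        if err != nil {
            fmt.Println("VIP unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("VIP reachable")
    }
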
	
	
	==> describe nodes <==
	Name:               old-k8s-version-136598
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-136598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=old-k8s-version-136598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_29_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:28:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-136598
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:30:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:30:37 +0000   Sat, 18 Oct 2025 09:28:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:30:37 +0000   Sat, 18 Oct 2025 09:28:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:30:37 +0000   Sat, 18 Oct 2025 09:28:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:30:37 +0000   Sat, 18 Oct 2025 09:29:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-136598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2ec042d2-4e94-4d4b-a1d0-dda9032068a7
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-6ldkv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-old-k8s-version-136598                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-zff87                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-136598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-136598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-9pwdq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-136598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fx8gc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-c2x4c             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-136598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node old-k8s-version-136598 event: Registered Node old-k8s-version-136598 in Controller
	  Normal  NodeReady                96s                kubelet          Node old-k8s-version-136598 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node old-k8s-version-136598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node old-k8s-version-136598 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node old-k8s-version-136598 event: Registered Node old-k8s-version-136598 in Controller
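
Annotation: when reading the Allocated resources table above, note that the percentages appear truncated rather than rounded: 850m of the 2000m allocatable CPU is 42.5%, printed as "(42%)", which is consistent with integer division. A two-line check of that arithmetic:

    package main

    import "fmt"

    func main() {
        // 850m requested of 2000m allocatable: integer division
        // reproduces the "(42%)" shown in the table above.
        requestsMilli, allocatableMilli := 850, 2000
        fmt.Printf("cpu %d%%\n", requestsMilli*100/allocatableMilli) // cpu 42%
    }
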
	
	
	==> dmesg <==
	[Oct18 09:07] overlayfs: idmapped layers are currently not supported
	[ +35.005632] overlayfs: idmapped layers are currently not supported
	[Oct18 09:08] overlayfs: idmapped layers are currently not supported
	[Oct18 09:10] overlayfs: idmapped layers are currently not supported
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f32a0ff7855256e0fb8bf0a5004fc3ea08393ad5781721d98ea878404aa56ba5] <==
	{"level":"info","ts":"2025-10-18T09:30:02.120223Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:30:02.123502Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-18T09:30:02.128663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T09:30:02.129368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T09:30:02.129701Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:30:02.129769Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:30:02.141455Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:30:02.150805Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:30:02.151Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:30:02.151881Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:30:02.151832Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:30:03.092051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T09:30:03.092105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:30:03.092123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T09:30:03.092136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T09:30:03.092142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T09:30:03.092152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T09:30:03.09216Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T09:30:03.098645Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-136598 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:30:03.098694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:30:03.100425Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:30:03.109585Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T09:30:03.109958Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T09:30:03.147912Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:30:03.148008Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 09:31:02 up 11:13,  0 user,  load average: 2.20, 2.78, 2.42
	Linux old-k8s-version-136598 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6bc656f23e97f2a26e96b76c07156bef906ac7390f71c3992b639108a73b3b77] <==
	I1018 09:30:07.816589       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:30:07.816800       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:30:07.816931       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:30:07.816942       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:30:07.816952       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:30:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:30:08.013166       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:30:08.013184       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:30:08.013194       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:30:08.014387       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:30:38.013788       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:30:38.013802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:30:38.015097       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 09:30:38.015110       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1018 09:30:39.614028       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:30:39.614155       1 metrics.go:72] Registering metrics
	I1018 09:30:39.614245       1 controller.go:711] "Syncing nftables rules"
	I1018 09:30:48.014806       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:30:48.014988       1 main.go:301] handling current node
	I1018 09:30:58.022495       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:30:58.022528       1 main.go:301] handling current node
	
	
	==> kube-apiserver [eeff6e8a782500b5b3c99df3f19a42ccbe900a5e69acb513548515106b6b820b] <==
	I1018 09:30:06.544655       1 aggregator.go:166] initial CRD sync complete...
	I1018 09:30:06.544671       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 09:30:06.544678       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:30:06.544684       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:30:06.578444       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 09:30:06.579338       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 09:30:06.579402       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1018 09:30:06.579925       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:30:06.590810       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:30:07.332215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:30:08.282336       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 09:30:08.327487       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 09:30:08.364671       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:30:08.374143       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:30:08.396305       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 09:30:08.480588       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.29.166"}
	I1018 09:30:08.505927       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.98.125"}
	E1018 09:30:16.579644       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I1018 09:30:19.163108       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 09:30:19.195936       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 09:30:19.205520       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1018 09:30:26.581417       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:30:36.581838       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:30:46.582920       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:30:56.586128       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [4744dbee055e10babc2fda11917557a08f5b523b1ba26af96a30f9d1e4200027] <==
	I1018 09:30:19.280156       1 taint_manager.go:211] "Sending events to api server"
	I1018 09:30:19.280405       1 event.go:307] "Event occurred" object="old-k8s-version-136598" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-136598 event: Registered Node old-k8s-version-136598 in Controller"
	I1018 09:30:19.282961       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1018 09:30:19.287410       1 shared_informer.go:318] Caches are synced for crt configmap
	I1018 09:30:19.289617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="87.874429ms"
	I1018 09:30:19.302152       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.48497ms"
	I1018 09:30:19.302241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="40.319µs"
	I1018 09:30:19.321513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.874822ms"
	I1018 09:30:19.333202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="79.94µs"
	I1018 09:30:19.335449       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:30:19.349632       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="28.06788ms"
	I1018 09:30:19.349828       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="76.068µs"
	I1018 09:30:19.391148       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:30:19.721308       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:30:19.721346       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 09:30:19.749692       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:30:24.457394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.753µs"
	I1018 09:30:25.463427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="147.795µs"
	I1018 09:30:26.500535       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.235µs"
	I1018 09:30:30.505217       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.915902ms"
	I1018 09:30:30.505487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.827µs"
	I1018 09:30:41.235733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.826668ms"
	I1018 09:30:41.235915       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.007µs"
	I1018 09:30:44.549556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.771µs"
	I1018 09:30:49.596382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.868µs"
	
	
	==> kube-proxy [83f2dfcd2a9a025f2474d147b1054078bec3a16567dfd26d3cf9d202de3cda59] <==
	I1018 09:30:07.882967       1 server_others.go:69] "Using iptables proxy"
	I1018 09:30:07.912250       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 09:30:08.048628       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:30:08.050690       1 server_others.go:152] "Using iptables Proxier"
	I1018 09:30:08.050817       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 09:30:08.050851       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 09:30:08.050906       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 09:30:08.051160       1 server.go:846] "Version info" version="v1.28.0"
	I1018 09:30:08.051380       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:30:08.052199       1 config.go:188] "Starting service config controller"
	I1018 09:30:08.052278       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 09:30:08.052336       1 config.go:97] "Starting endpoint slice config controller"
	I1018 09:30:08.052371       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 09:30:08.052864       1 config.go:315] "Starting node config controller"
	I1018 09:30:08.053956       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 09:30:08.152668       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 09:30:08.152711       1 shared_informer.go:318] Caches are synced for service config
	I1018 09:30:08.154132       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8eda135cfc037fbd8f05a4e7cfe080a910c91820b87edf4d7370b9af44b3bbc5] <==
	I1018 09:30:04.720310       1 serving.go:348] Generated self-signed cert in-memory
	W1018 09:30:06.332908       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:30:06.332940       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:30:06.332949       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:30:06.332959       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:30:06.502712       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1018 09:30:06.502750       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:30:06.504855       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1018 09:30:06.511277       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:30:06.511463       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1018 09:30:06.511493       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1018 09:30:06.611922       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: I1018 09:30:19.372118     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e97c2dca-e2c5-4d41-8dcc-b60fda13fea8-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-c2x4c\" (UID: \"e97c2dca-e2c5-4d41-8dcc-b60fda13fea8\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c2x4c"
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: I1018 09:30:19.372347     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6d26dd0-d0cb-4d7c-9c28-5979bac7befa-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fx8gc\" (UID: \"f6d26dd0-d0cb-4d7c-9c28-5979bac7befa\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc"
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: I1018 09:30:19.372446     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxnpb\" (UniqueName: \"kubernetes.io/projected/e97c2dca-e2c5-4d41-8dcc-b60fda13fea8-kube-api-access-lxnpb\") pod \"kubernetes-dashboard-8694d4445c-c2x4c\" (UID: \"e97c2dca-e2c5-4d41-8dcc-b60fda13fea8\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c2x4c"
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: I1018 09:30:19.372549     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5vb6\" (UniqueName: \"kubernetes.io/projected/f6d26dd0-d0cb-4d7c-9c28-5979bac7befa-kube-api-access-c5vb6\") pod \"dashboard-metrics-scraper-5f989dc9cf-fx8gc\" (UID: \"f6d26dd0-d0cb-4d7c-9c28-5979bac7befa\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc"
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: W1018 09:30:19.602411     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/crio-0c9b4ef92d06d71177a5436d9c71631d78ab7ad124b2510a64f289410750e502 WatchSource:0}: Error finding container 0c9b4ef92d06d71177a5436d9c71631d78ab7ad124b2510a64f289410750e502: Status 404 returned error can't find the container with id 0c9b4ef92d06d71177a5436d9c71631d78ab7ad124b2510a64f289410750e502
	Oct 18 09:30:19 old-k8s-version-136598 kubelet[779]: W1018 09:30:19.621645     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/396852f7b3ff57804dec4cde35210d902d03238e0cea7fff525cafcc0f4703cf/crio-fd6e4146beb1f362dbcc50e121fe618f14c871f369856970fba090ecd01ca0f9 WatchSource:0}: Error finding container fd6e4146beb1f362dbcc50e121fe618f14c871f369856970fba090ecd01ca0f9: Status 404 returned error can't find the container with id fd6e4146beb1f362dbcc50e121fe618f14c871f369856970fba090ecd01ca0f9
	Oct 18 09:30:24 old-k8s-version-136598 kubelet[779]: I1018 09:30:24.440790     779 scope.go:117] "RemoveContainer" containerID="920279401f75960f18abcbbe4a10d256b29167c101d1e79850cd8e8735b37ab2"
	Oct 18 09:30:25 old-k8s-version-136598 kubelet[779]: I1018 09:30:25.446081     779 scope.go:117] "RemoveContainer" containerID="920279401f75960f18abcbbe4a10d256b29167c101d1e79850cd8e8735b37ab2"
	Oct 18 09:30:25 old-k8s-version-136598 kubelet[779]: I1018 09:30:25.446904     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:25 old-k8s-version-136598 kubelet[779]: E1018 09:30:25.447175     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:26 old-k8s-version-136598 kubelet[779]: I1018 09:30:26.458803     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:26 old-k8s-version-136598 kubelet[779]: E1018 09:30:26.459080     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:29 old-k8s-version-136598 kubelet[779]: I1018 09:30:29.573978     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:29 old-k8s-version-136598 kubelet[779]: E1018 09:30:29.574307     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:38 old-k8s-version-136598 kubelet[779]: I1018 09:30:38.497880     779 scope.go:117] "RemoveContainer" containerID="4ed79a13689cc2b55515752533f3bc48fdf545aa3639d366a9d3b722274b9426"
	Oct 18 09:30:38 old-k8s-version-136598 kubelet[779]: I1018 09:30:38.525119     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-c2x4c" podStartSLOduration=9.458979574 podCreationTimestamp="2025-10-18 09:30:19 +0000 UTC" firstStartedPulling="2025-10-18 09:30:19.625250467 +0000 UTC m=+18.618161454" lastFinishedPulling="2025-10-18 09:30:29.691323133 +0000 UTC m=+28.684234128" observedRunningTime="2025-10-18 09:30:30.492864062 +0000 UTC m=+29.485775049" watchObservedRunningTime="2025-10-18 09:30:38.525052248 +0000 UTC m=+37.517963234"
	Oct 18 09:30:44 old-k8s-version-136598 kubelet[779]: I1018 09:30:44.276632     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:44 old-k8s-version-136598 kubelet[779]: I1018 09:30:44.515047     779 scope.go:117] "RemoveContainer" containerID="49bfd15157cd8161028f63d5795cc44b394da26c576503339cdac4ecb3345f08"
	Oct 18 09:30:44 old-k8s-version-136598 kubelet[779]: I1018 09:30:44.515492     779 scope.go:117] "RemoveContainer" containerID="4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4"
	Oct 18 09:30:44 old-k8s-version-136598 kubelet[779]: E1018 09:30:44.515932     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:49 old-k8s-version-136598 kubelet[779]: I1018 09:30:49.573033     779 scope.go:117] "RemoveContainer" containerID="4b8a746cb4e9fe3d6643315df4c667c90b518acd180e1c16a65a071f5685c2b4"
	Oct 18 09:30:49 old-k8s-version-136598 kubelet[779]: E1018 09:30:49.574042     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fx8gc_kubernetes-dashboard(f6d26dd0-d0cb-4d7c-9c28-5979bac7befa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fx8gc" podUID="f6d26dd0-d0cb-4d7c-9c28-5979bac7befa"
	Oct 18 09:30:56 old-k8s-version-136598 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:30:56 old-k8s-version-136598 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:30:56 old-k8s-version-136598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e5e655d2337a235dfde07e733ece6522bca028878923664cb1aa29ce8e0720ec] <==
	2025/10/18 09:30:29 Starting overwatch
	2025/10/18 09:30:29 Using namespace: kubernetes-dashboard
	2025/10/18 09:30:29 Using in-cluster config to connect to apiserver
	2025/10/18 09:30:29 Using secret token for csrf signing
	2025/10/18 09:30:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:30:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:30:29 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 09:30:29 Generating JWE encryption key
	2025/10/18 09:30:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:30:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:30:30 Initializing JWE encryption key from synchronized object
	2025/10/18 09:30:30 Creating in-cluster Sidecar client
	2025/10/18 09:30:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:30:30 Serving insecurely on HTTP port: 9090
	2025/10/18 09:31:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4ed79a13689cc2b55515752533f3bc48fdf545aa3639d366a9d3b722274b9426] <==
	I1018 09:30:07.794951       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:30:37.800106       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8386a4b7788357740c93bd64a1e18437f5db13598342e846f345da3ae2796669] <==
	I1018 09:30:38.547568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:30:38.560423       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:30:38.560536       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 09:30:55.965503       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:30:55.965969       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc909686-371c-405c-ab2b-9cef06488b3d", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-136598_4382fcbc-b3f9-4387-bfbc-3c0b56091c86 became leader
	I1018 09:30:55.966046       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-136598_4382fcbc-b3f9-4387-bfbc-3c0b56091c86!
	I1018 09:30:56.066221       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-136598_4382fcbc-b3f9-4387-bfbc-3c0b56091c86!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-136598 -n old-k8s-version-136598
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-136598 -n old-k8s-version-136598: exit status 2 (546.599129ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
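The `{{.APIServer}}` query above prints "Running" even though the command exits non-zero, because minikube folds each component's state into the exit code as bit flags (exit status 2 here corresponds to minikube's "cluster not running" bit). A minimal widening of the probe, assuming the profile is still up; the field names are minikube's documented status-template fields:

	# Print all status fields at once; here kubelet had just been stopped
	# (see the kubelet log above), which explains the non-zero exit despite
	# a running apiserver.
	out/minikube-linux-arm64 status -p old-k8s-version-136598 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'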
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-136598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.99s)
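The pause invocation that failed here is recorded verbatim in the Audit table later in this report; replaying it by hand, assuming the profile still exists, reproduces the same runc probe error that the addon tests below hit:

	# Replay the failed step (command copied from the audit log), then
	# unpause so the profile stays usable for further debugging.
	out/minikube-linux-arm64 pause -p old-k8s-version-136598 --alsologtostderr -v=1
	out/minikube-linux-arm64 unpause -p old-k8s-version-136598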

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (278.70039ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:32:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
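Most Pause and EnableAddonWhileActive failures in this run reduce to this one probe: minikube shells into the node and runs `runc list -f json`, which fails because `/run/runc` is missing. The docker inspect output below shows `/run` mounted as a tmpfs inside the kic container, so the directory exists only after the runtime has written container state there. A manual check, assuming the profile is still running; the crio `runtime_root` key is an assumption to verify against the node's actual config:

	# Probe the state directory the failing check depends on.
	minikube ssh -p no-preload-886951 -- sudo ls /run/runc
	# Re-run the exact command from the error message above.
	minikube ssh -p no-preload-886951 -- sudo runc list -f json
	# Check where crio actually keeps runc state.
	minikube ssh -p no-preload-886951 -- sudo crio config | grep runtime_root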
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-886951 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-886951 describe deploy/metrics-server -n kube-system: exit status 1 (85.727532ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-886951 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
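The assertion at start_stop_delete_test.go:219 can be replayed directly; since the deployment was never created, this returns the same NotFound as above, but on a healthy profile it would print the image string the test greps for:

	# Pull the container image out of the metrics-server deployment; the
	# test expects it to contain fake.domain/registry.k8s.io/echoserver:1.4.
	kubectl --context no-preload-886951 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'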
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-886951
helpers_test.go:243: (dbg) docker inspect no-preload-886951:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244",
	        "Created": "2025-10-18T09:30:57.518122221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1460870,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:30:57.631705825Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/hostname",
	        "HostsPath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/hosts",
	        "LogPath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244-json.log",
	        "Name": "/no-preload-886951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-886951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-886951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244",
	                "LowerDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-886951",
	                "Source": "/var/lib/docker/volumes/no-preload-886951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-886951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-886951",
	                "name.minikube.sigs.k8s.io": "no-preload-886951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2523e815d1f43806961fe7e630b43d86e6dbcf92755c49bdecfae28ce2249151",
	            "SandboxKey": "/var/run/docker/netns/2523e815d1f4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34881"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34882"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34885"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34883"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34884"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-886951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:03:10:50:f7:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3e5f60352a068e220fd2810f4516a5014f16c78647f632d14d145d4ec80d9b4f",
	                    "EndpointID": "99cad04735123235206c63578976d0046a6cdea446d88d93542c7ca9f5d58d43",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-886951",
	                        "53265fd5269c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
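The port mappings buried in the inspect dump above are how the test helpers reach the node; a one-line Go-template lookup recovers the host port forwarded to the API server (8443/tcp maps to 34884 in this dump):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-886951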
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-886951 -n no-preload-886951
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-886951 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-886951 logs -n 25: (1.196820184s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-275703                                                                                                                                                                                                                              │ cilium-275703             │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ start   │ -p force-systemd-env-406177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-406177  │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ delete  │ -p force-systemd-env-406177                                                                                                                                                                                                                   │ force-systemd-env-406177  │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:26 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:26 UTC │ 18 Oct 25 09:27 UTC │
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │                     │
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ delete  │ -p kubernetes-upgrade-757858                                                                                                                                                                                                                  │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ start   │ -p cert-options-783705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ cert-options-783705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ -p cert-options-783705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ delete  │ -p cert-options-783705                                                                                                                                                                                                                        │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │                     │
	│ stop    │ -p old-k8s-version-136598 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-136598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p cert-expiration-854768                                                                                                                                                                                                                     │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ image   │ old-k8s-version-136598 image list --format=json                                                                                                                                                                                               │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-136598 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951         │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379        │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-886951         │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
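The Audit table above is rendered from minikube's JSON-lines audit log under the MINIKUBE_HOME shown earlier in this report; a sketch for querying it directly (the `.data` field names are an assumption about the current audit schema):

	# Flatten each audit entry to command, profile and start time.
	jq -r '.data | [.command, .profile, .startTime] | @tsv' \
	  /home/jenkins/minikube-integration/21767-1274243/.minikube/logs/audit.json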
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:31:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:31:07.685423 1463591 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:31:07.685530 1463591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:31:07.685535 1463591 out.go:374] Setting ErrFile to fd 2...
	I1018 09:31:07.685540 1463591 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:31:07.685794 1463591 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:31:07.686184 1463591 out.go:368] Setting JSON to false
	I1018 09:31:07.700896 1463591 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40415,"bootTime":1760739453,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:31:07.701074 1463591 start.go:141] virtualization:  
	I1018 09:31:07.705699 1463591 out.go:179] * [embed-certs-559379] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:31:07.709807 1463591 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:31:07.709952 1463591 notify.go:220] Checking for updates...
	I1018 09:31:07.717945 1463591 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:31:07.721270 1463591 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:31:07.729735 1463591 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:31:07.733040 1463591 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:31:07.736099 1463591 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:31:07.739750 1463591 config.go:182] Loaded profile config "no-preload-886951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:07.739988 1463591 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:31:07.836031 1463591 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:31:07.836145 1463591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:31:08.073598 1463591 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-18 09:31:08.058123757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:31:08.073726 1463591 docker.go:318] overlay module found
	I1018 09:31:08.076971 1463591 out.go:179] * Using the docker driver based on user configuration
	I1018 09:31:08.080162 1463591 start.go:305] selected driver: docker
	I1018 09:31:08.080180 1463591 start.go:925] validating driver "docker" against <nil>
	I1018 09:31:08.080201 1463591 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:31:08.080912 1463591 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:31:08.250092 1463591 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-18 09:31:08.239740978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
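The two docker info dumps above are the same health probe run twice: once for driver selection, once for flag validation. A minimal shell equivalent of that probe, assuming jq is available (minikube itself decodes this JSON in Go):

	# Ask the daemon for the same JSON the log captures and keep a few health fields.
	docker system info --format '{{json .}}' \
	  | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal}'
	# A non-zero exit from either command is what start.go treats as "docker not healthy".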
	I1018 09:31:08.250240 1463591 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:31:08.250455 1463591 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:31:08.253453 1463591 out.go:179] * Using Docker driver with root privileges
	I1018 09:31:08.256299 1463591 cni.go:84] Creating CNI manager for ""
	I1018 09:31:08.256378 1463591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:31:08.256392 1463591 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:31:08.256476 1463591 start.go:349] cluster config:
	{Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:31:08.259565 1463591 out.go:179] * Starting "embed-certs-559379" primary control-plane node in "embed-certs-559379" cluster
	I1018 09:31:08.262378 1463591 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:31:08.265292 1463591 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:31:08.268077 1463591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:31:08.268148 1463591 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:31:08.268179 1463591 cache.go:58] Caching tarball of preloaded images
	I1018 09:31:08.268268 1463591 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:31:08.268282 1463591 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:31:08.268389 1463591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/config.json ...
	I1018 09:31:08.268414 1463591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/config.json: {Name:mk322d55d02e18d4c5e9d6a3ebec2dc12a1f86a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:08.268582 1463591 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:31:08.292500 1463591 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:31:08.292527 1463591 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:31:08.292541 1463591 cache.go:232] Successfully downloaded all kic artifacts
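The daemon-side image check at 09:31:08.268582 is an inspect by digest-pinned reference; when it succeeds, both the pull and the load are skipped. A sketch using the same reference:

	IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6'
	# Exit 0 means the image already exists in the local daemon.
	docker image inspect "$IMG" > /dev/null 2>&1 && echo 'exists in daemon, skipping pull'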
	I1018 09:31:08.292563 1463591 start.go:360] acquireMachinesLock for embed-certs-559379: {Name:mk418755d6e5d94c4c79fcae2f644d56877c0df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:31:08.292670 1463591 start.go:364] duration metric: took 87.464µs to acquireMachinesLock for "embed-certs-559379"
	I1018 09:31:08.292700 1463591 start.go:93] Provisioning new machine with config: &{Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:31:08.292771 1463591 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:31:06.144810 1460427 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1018 09:31:06.144852 1460427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:31:06.144915 1460427 ssh_runner.go:195] Run: which crictl
	I1018 09:31:06.145017 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:31:06.145075 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:31:06.201138 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 09:31:06.201223 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 09:31:06.201286 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:31:06.201340 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:31:06.275158 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:31:06.275233 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:31:06.275360 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:31:06.424605 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:31:06.424713 1460427 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1018 09:31:06.424780 1460427 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1018 09:31:06.424915 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1018 09:31:06.424999 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1018 09:31:06.425079 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:31:06.429564 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:31:06.429641 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:31:06.429704 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:31:06.552658 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1018 09:31:06.552693 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1018 09:31:06.552772 1460427 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 09:31:06.552858 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 09:31:06.552931 1460427 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 09:31:06.552998 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1018 09:31:06.553071 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1018 09:31:06.553083 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1018 09:31:06.553130 1460427 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 09:31:06.553171 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 09:31:06.553238 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:31:06.553278 1460427 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 09:31:06.553323 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 09:31:06.631015 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1018 09:31:06.631049 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1018 09:31:06.631139 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1018 09:31:06.631161 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1018 09:31:06.631198 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1018 09:31:06.631214 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1018 09:31:06.688012 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1018 09:31:06.688090 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1018 09:31:06.688638 1460427 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 09:31:06.688794 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 09:31:06.718110 1460427 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1018 09:31:06.718220 1460427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1018 09:31:06.794348 1460427 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1018 09:31:06.794559 1460427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:06.815749 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1018 09:31:06.815787 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1018 09:31:07.244176 1460427 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1018 09:31:07.244268 1460427 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:07.244362 1460427 ssh_runner.go:195] Run: which crictl
	I1018 09:31:07.250419 1460427 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1018 09:31:07.275960 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:07.362020 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:07.396113 1460427 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 09:31:07.396227 1460427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 09:31:07.472342 1460427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:09.956079 1460427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.559803883s)
	I1018 09:31:09.956103 1460427 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1018 09:31:09.956121 1460427 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 09:31:09.956169 1460427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 09:31:09.956221 1460427 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.483860848s)
	I1018 09:31:09.956244 1460427 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1018 09:31:09.956313 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
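Process 1460427 above repeats one pattern per image: stat the target path over SSH, scp the cached tarball when the stat fails, then podman load it so CRI-O (which shares image storage with podman) can see it. A condensed sketch of that pattern; "node" stands in for the ssh_runner target and is an assumption:

	CACHE=/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64
	for img in pause_3.10.1 etcd_3.6.4-0; do
	  # Existence check; "No such file or directory" triggers the transfer.
	  ssh node stat -c '%s %y' "/var/lib/minikube/images/$img" \
	    || scp "$CACHE/registry.k8s.io/$img" "node:/var/lib/minikube/images/$img"
	  # podman and CRI-O share containers/storage, so the image becomes visible to crictl.
	  ssh node sudo podman load -i "/var/lib/minikube/images/$img"
	done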
	I1018 09:31:08.296238 1463591 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:31:08.296490 1463591 start.go:159] libmachine.API.Create for "embed-certs-559379" (driver="docker")
	I1018 09:31:08.296526 1463591 client.go:168] LocalClient.Create starting
	I1018 09:31:08.296581 1463591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem
	I1018 09:31:08.296617 1463591 main.go:141] libmachine: Decoding PEM data...
	I1018 09:31:08.296639 1463591 main.go:141] libmachine: Parsing certificate...
	I1018 09:31:08.296698 1463591 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem
	I1018 09:31:08.296720 1463591 main.go:141] libmachine: Decoding PEM data...
	I1018 09:31:08.296733 1463591 main.go:141] libmachine: Parsing certificate...
	I1018 09:31:08.297099 1463591 cli_runner.go:164] Run: docker network inspect embed-certs-559379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:31:08.319520 1463591 cli_runner.go:211] docker network inspect embed-certs-559379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:31:08.319606 1463591 network_create.go:284] running [docker network inspect embed-certs-559379] to gather additional debugging logs...
	I1018 09:31:08.319627 1463591 cli_runner.go:164] Run: docker network inspect embed-certs-559379
	W1018 09:31:08.336337 1463591 cli_runner.go:211] docker network inspect embed-certs-559379 returned with exit code 1
	I1018 09:31:08.336371 1463591 network_create.go:287] error running [docker network inspect embed-certs-559379]: docker network inspect embed-certs-559379: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-559379 not found
	I1018 09:31:08.336385 1463591 network_create.go:289] output of [docker network inspect embed-certs-559379]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-559379 not found
	
	** /stderr **
	I1018 09:31:08.336477 1463591 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:31:08.362148 1463591 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-521f8f572997 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:7e:e5:c0:67:29} reservation:<nil>}
	I1018 09:31:08.362556 1463591 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b81e76c4e4f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:bf:e8:f1:22:c8} reservation:<nil>}
	I1018 09:31:08.362880 1463591 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-41e3e621447e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:fc:17:ff:cd:8c} reservation:<nil>}
	I1018 09:31:08.363292 1463591 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d7ae0}
	I1018 09:31:08.363319 1463591 network_create.go:124] attempt to create docker network embed-certs-559379 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 09:31:08.363374 1463591 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-559379 embed-certs-559379
	I1018 09:31:08.441169 1463591 network_create.go:108] docker network embed-certs-559379 192.168.76.0/24 created
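Subnet selection above walks 192.168.49.0/24, 192.168.58.0/24, and 192.168.67.0/24 (all taken) before settling on 192.168.76.0/24. The create call, lifted from the Run line above as a standalone sketch:

	docker network create --driver=bridge \
	  --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=embed-certs-559379 \
	  embed-certs-559379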
	I1018 09:31:08.441202 1463591 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-559379" container
	I1018 09:31:08.441285 1463591 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:31:08.465731 1463591 cli_runner.go:164] Run: docker volume create embed-certs-559379 --label name.minikube.sigs.k8s.io=embed-certs-559379 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:31:08.490006 1463591 oci.go:103] Successfully created a docker volume embed-certs-559379
	I1018 09:31:08.490100 1463591 cli_runner.go:164] Run: docker run --rm --name embed-certs-559379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-559379 --entrypoint /usr/bin/test -v embed-certs-559379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:31:09.613872 1463591 cli_runner.go:217] Completed: docker run --rm --name embed-certs-559379-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-559379 --entrypoint /usr/bin/test -v embed-certs-559379:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (1.123731781s)
	I1018 09:31:09.613904 1463591 oci.go:107] Successfully prepared a docker volume embed-certs-559379
	I1018 09:31:09.613929 1463591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:31:09.613948 1463591 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:31:09.614027 1463591 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-559379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
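The extractor above never starts a long-lived container: it bind-mounts the preload tarball read-only, mounts the named volume at /extractDir, and runs tar as the entrypoint. The shape of the trick, with a local tarball path as a stand-in:

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro" \
	  -v embed-certs-559379:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757 \
	  -I lz4 -xf /preloaded.tar -C /extractDir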
	I1018 09:31:11.539777 1460427 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.583443703s)
	I1018 09:31:11.539804 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1018 09:31:11.539826 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1018 09:31:11.539974 1460427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.58379493s)
	I1018 09:31:11.539984 1460427 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1018 09:31:11.540000 1460427 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 09:31:11.540050 1460427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 09:31:13.707681 1460427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.167608841s)
	I1018 09:31:13.707712 1460427 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1018 09:31:13.707736 1460427 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1018 09:31:13.707787 1460427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1018 09:31:14.864769 1463591 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-559379:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.25070333s)
	I1018 09:31:14.864814 1463591 kic.go:203] duration metric: took 5.25086266s to extract preloaded images to volume ...
	W1018 09:31:14.864959 1463591 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:31:14.865069 1463591 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:31:14.949145 1463591 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-559379 --name embed-certs-559379 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-559379 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-559379 --network embed-certs-559379 --ip 192.168.76.2 --volume embed-certs-559379:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:31:15.387000 1463591 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Running}}
	I1018 09:31:15.418411 1463591 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:31:15.441104 1463591 cli_runner.go:164] Run: docker exec embed-certs-559379 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:31:15.491276 1463591 oci.go:144] the created container "embed-certs-559379" has a running status.
	I1018 09:31:15.491315 1463591 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa...
	I1018 09:31:16.043223 1463591 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:31:16.076189 1463591 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:31:16.119673 1463591 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:31:16.119696 1463591 kic_runner.go:114] Args: [docker exec --privileged embed-certs-559379 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:31:16.180544 1463591 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:31:16.211648 1463591 machine.go:93] provisionDockerMachine start ...
	I1018 09:31:16.211741 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:16.229969 1463591 main.go:141] libmachine: Using SSH client type: native
	I1018 09:31:16.230289 1463591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34886 <nil> <nil>}
	I1018 09:31:16.230298 1463591 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:31:16.230968 1463591 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 09:31:16.241390 1460427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.533577081s)
	I1018 09:31:16.241418 1460427 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1018 09:31:16.241435 1460427 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 09:31:16.241481 1460427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 09:31:17.868132 1460427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.626625063s)
	I1018 09:31:17.868161 1460427 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1018 09:31:17.868180 1460427 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1018 09:31:17.868232 1460427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1018 09:31:19.387423 1463591 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-559379
	
	I1018 09:31:19.387498 1463591 ubuntu.go:182] provisioning hostname "embed-certs-559379"
	I1018 09:31:19.387590 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:19.406677 1463591 main.go:141] libmachine: Using SSH client type: native
	I1018 09:31:19.406986 1463591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34886 <nil> <nil>}
	I1018 09:31:19.407004 1463591 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-559379 && echo "embed-certs-559379" | sudo tee /etc/hostname
	I1018 09:31:19.586173 1463591 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-559379
	
	I1018 09:31:19.586289 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:19.620000 1463591 main.go:141] libmachine: Using SSH client type: native
	I1018 09:31:19.620314 1463591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34886 <nil> <nil>}
	I1018 09:31:19.620336 1463591 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-559379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-559379/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-559379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:31:19.799979 1463591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:31:19.800050 1463591 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:31:19.800082 1463591 ubuntu.go:190] setting up certificates
	I1018 09:31:19.800122 1463591 provision.go:84] configureAuth start
	I1018 09:31:19.800241 1463591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-559379
	I1018 09:31:19.825414 1463591 provision.go:143] copyHostCerts
	I1018 09:31:19.825476 1463591 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:31:19.825485 1463591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:31:19.825550 1463591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:31:19.825634 1463591 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:31:19.825639 1463591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:31:19.825664 1463591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:31:19.825817 1463591 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:31:19.825825 1463591 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:31:19.825858 1463591 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:31:19.825933 1463591 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.embed-certs-559379 san=[127.0.0.1 192.168.76.2 embed-certs-559379 localhost minikube]
	I1018 09:31:19.963570 1463591 provision.go:177] copyRemoteCerts
	I1018 09:31:19.963638 1463591 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:31:19.963682 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:19.981589 1463591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34886 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:31:20.099060 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:31:20.132040 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:31:20.157449 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
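provision.go:117 issues a server certificate signed by the minikube CA with the SANs listed at 09:31:19.825933 (127.0.0.1, 192.168.76.2, embed-certs-559379, localhost, minikube). minikube does this in Go; an openssl sketch of the equivalent issuance, under that assumption and using the file names from the log:

	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj '/O=jenkins.embed-certs-559379' -out server.csr
	# Sign with the CA and attach the same subjectAltName set the log reports.
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-559379,DNS:localhost,DNS:minikube') \
	  -days 365 -out server.pem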
	I1018 09:31:20.184811 1463591 provision.go:87] duration metric: took 384.649395ms to configureAuth
	I1018 09:31:20.184838 1463591 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:31:20.185038 1463591 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:20.185141 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:20.208036 1463591 main.go:141] libmachine: Using SSH client type: native
	I1018 09:31:20.208357 1463591 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34886 <nil> <nil>}
	I1018 09:31:20.208371 1463591 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:31:20.517851 1463591 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:31:20.517875 1463591 machine.go:96] duration metric: took 4.30620991s to provisionDockerMachine
	I1018 09:31:20.517885 1463591 client.go:171] duration metric: took 12.221349859s to LocalClient.Create
	I1018 09:31:20.517900 1463591 start.go:167] duration metric: took 12.221409886s to libmachine.API.Create "embed-certs-559379"
	I1018 09:31:20.517907 1463591 start.go:293] postStartSetup for "embed-certs-559379" (driver="docker")
	I1018 09:31:20.517916 1463591 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:31:20.517983 1463591 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:31:20.518025 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:20.549505 1463591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34886 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:31:20.665009 1463591 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:31:20.669149 1463591 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:31:20.669227 1463591 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:31:20.669266 1463591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:31:20.669360 1463591 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:31:20.669485 1463591 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:31:20.669641 1463591 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:31:20.677997 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:31:20.698649 1463591 start.go:296] duration metric: took 180.727056ms for postStartSetup
	I1018 09:31:20.699132 1463591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-559379
	I1018 09:31:20.725956 1463591 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/config.json ...
	I1018 09:31:20.726239 1463591 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:31:20.726298 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:20.743033 1463591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34886 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:31:20.845709 1463591 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:31:20.853431 1463591 start.go:128] duration metric: took 12.5606448s to createHost
	I1018 09:31:20.853456 1463591 start.go:83] releasing machines lock for "embed-certs-559379", held for 12.560773043s
	I1018 09:31:20.853536 1463591 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-559379
	I1018 09:31:20.873351 1463591 ssh_runner.go:195] Run: cat /version.json
	I1018 09:31:20.873411 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:20.873703 1463591 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:31:20.873752 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:20.897261 1463591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34886 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:31:20.913388 1463591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34886 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:31:21.024336 1463591 ssh_runner.go:195] Run: systemctl --version
	I1018 09:31:21.146699 1463591 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:31:21.197100 1463591 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:31:21.203333 1463591 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:31:21.203479 1463591 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:31:21.249474 1463591 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 09:31:21.249500 1463591 start.go:495] detecting cgroup driver to use...
	I1018 09:31:21.249531 1463591 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:31:21.249592 1463591 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:31:21.271784 1463591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:31:21.286381 1463591 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:31:21.286467 1463591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:31:21.304607 1463591 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:31:21.324284 1463591 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:31:21.473812 1463591 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:31:21.634674 1463591 docker.go:234] disabling docker service ...
	I1018 09:31:21.634751 1463591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:31:21.659282 1463591 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:31:21.673814 1463591 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:31:21.829038 1463591 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:31:21.988062 1463591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:31:22.006250 1463591 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:31:22.028356 1463591 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:31:22.028499 1463591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:22.037936 1463591 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:31:22.038047 1463591 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:22.047918 1463591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:22.057747 1463591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:22.070958 1463591 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:31:22.079885 1463591 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:22.090720 1463591 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:22.105240 1463591 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:31:22.114426 1463591 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:31:22.122705 1463591 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:31:22.130639 1463591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:22.256210 1463591 ssh_runner.go:195] Run: sudo systemctl restart crio
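The sed edits above set the pause image, switch cgroup_manager to cgroupfs, pin conmon_cgroup to "pod", and open unprivileged ports before this restart. A quick post-restart verification sketch:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# The socket wait at start.go:542 amounts to this answering within 60s.
	sudo systemctl is-active crio && sudo crictl info > /dev/null && echo 'crio socket answering'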
	I1018 09:31:22.702988 1463591 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:31:22.703110 1463591 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:31:22.707584 1463591 start.go:563] Will wait 60s for crictl version
	I1018 09:31:22.707703 1463591 ssh_runner.go:195] Run: which crictl
	I1018 09:31:22.711957 1463591 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:31:22.747236 1463591 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:31:22.747384 1463591 ssh_runner.go:195] Run: crio --version
	I1018 09:31:22.783332 1463591 ssh_runner.go:195] Run: crio --version
	I1018 09:31:22.819727 1463591 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:31:22.249234 1460427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.380976294s)
	I1018 09:31:22.249262 1460427 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1018 09:31:22.249280 1460427 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1018 09:31:22.249325 1460427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1018 09:31:22.953199 1460427 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1018 09:31:22.953238 1460427 cache_images.go:124] Successfully loaded all cached images
	I1018 09:31:22.953244 1460427 cache_images.go:93] duration metric: took 17.451857228s to LoadCachedImages
	I1018 09:31:22.953255 1460427 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 09:31:22.953344 1460427 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-886951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
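The kubelet unit text above is installed as a systemd drop-in before kubeadm runs. The log does not show the destination path; assuming the conventional /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, installing it would look like:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	# Path is an assumption; the unit body mirrors the log verbatim.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf > /dev/null <<'EOF'
	[Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-886951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	EOF
	sudo systemctl daemon-reload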
	I1018 09:31:22.953425 1460427 ssh_runner.go:195] Run: crio config
	I1018 09:31:23.044901 1460427 cni.go:84] Creating CNI manager for ""
	I1018 09:31:23.044920 1460427 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:31:23.044936 1460427 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:31:23.044958 1460427 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-886951 NodeName:no-preload-886951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:31:23.045069 1460427 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-886951"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
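	(The dump above is one multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---". As a rough illustration only, not minikube code, the Go sketch below splits such a file and prints each apiVersion/kind pair; it assumes gopkg.in/yaml.v3 is available and a local file named kubeadm.yaml.)

// validate_kubeadm_yaml.go - illustrative sketch: enumerate the documents in
// a multi-document kubeadm config like the one logged above.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the config above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var tm typeMeta
		if err := dec.Decode(&tm); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// The config above should yield four documents: InitConfiguration,
		// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
	}
}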
	
	I1018 09:31:23.045124 1460427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:31:23.055408 1460427 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1018 09:31:23.055473 1460427 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1018 09:31:23.066982 1460427 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1018 09:31:23.067072 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1018 09:31:23.067914 1460427 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1018 09:31:23.068330 1460427 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1018 09:31:23.073060 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1018 09:31:23.073091 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1018 09:31:23.921643 1460427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:31:23.944039 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1018 09:31:23.951916 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1018 09:31:23.951993 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1018 09:31:24.040308 1460427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1018 09:31:24.065904 1460427 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1018 09:31:24.065950 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
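	(The binaries.go/binary.go/download.go lines above first stat the remote path and, on a miss, fetch each binary from dl.k8s.io with a "?checksum=file:...sha256" query, so the download is verified against the published digest. A minimal sketch of that verification step, assuming the binary and its .sha256 digest file sit in the current directory; file names are placeholders for the example.)

// checksum_check.go - illustrative sketch: verify a downloaded kubelet binary
// against its published SHA-256 digest, mirroring the checksum=file: pattern
// in the download URLs logged above.
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.Open("kubelet") // the downloaded binary (assumed name)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}

	want, err := os.ReadFile("kubelet.sha256") // hex digest published alongside
	if err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != string(bytes.TrimSpace(want)) {
		fmt.Println("checksum mismatch:", got)
		os.Exit(1)
	}
	fmt.Println("checksum OK")
}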
	I1018 09:31:24.861270 1460427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:31:24.869954 1460427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:31:24.883969 1460427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:31:24.899481 1460427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:31:24.914562 1460427 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:31:24.919669 1460427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:31:24.931822 1460427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:25.081111 1460427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:25.099919 1460427 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951 for IP: 192.168.85.2
	I1018 09:31:25.099938 1460427 certs.go:195] generating shared ca certs ...
	I1018 09:31:25.099955 1460427 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:25.100099 1460427 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:31:25.100147 1460427 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:31:25.100155 1460427 certs.go:257] generating profile certs ...
	I1018 09:31:25.100214 1460427 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.key
	I1018 09:31:25.100226 1460427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt with IP's: []
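	(Both profiles now mint per-profile certificates signed by the shared minikubeCA key reused above. A compact sketch of the same idea with Go's crypto/x509 - issue a client certificate from an existing CA pair. The paths, the CommonName, and the assumption that ca.key is a PKCS#1 RSA key are illustrative, not minikube's exact values.)

// sign_client_cert.go - illustrative sketch: sign a client cert with an
// existing CA, roughly what certs.go is doing for "minikube-user" above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.crt")    // assumed CA cert path
	caKeyPEM, _ := os.ReadFile("ca.key") // assumed PKCS#1 RSA CA key
	caBlock, _ := pem.Decode(caPEM)
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		panic(err)
	}

	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube-user"}, // illustrative
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &clientKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}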
	I1018 09:31:22.822806 1463591 cli_runner.go:164] Run: docker network inspect embed-certs-559379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:31:22.841511 1463591 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:31:22.845828 1463591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:31:22.855924 1463591 kubeadm.go:883] updating cluster {Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:31:22.856041 1463591 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:31:22.856091 1463591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:31:22.901166 1463591 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:31:22.901186 1463591 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:31:22.901241 1463591 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:31:22.936541 1463591 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:31:22.936566 1463591 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:31:22.936575 1463591 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:31:22.936656 1463591 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-559379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:31:22.936749 1463591 ssh_runner.go:195] Run: crio config
	I1018 09:31:23.044119 1463591 cni.go:84] Creating CNI manager for ""
	I1018 09:31:23.044191 1463591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:31:23.044221 1463591 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:31:23.044279 1463591 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-559379 NodeName:embed-certs-559379 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:31:23.044447 1463591 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-559379"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:31:23.044569 1463591 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:31:23.052862 1463591 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:31:23.052983 1463591 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:31:23.060973 1463591 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:31:23.097056 1463591 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:31:23.135494 1463591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 09:31:23.166594 1463591 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:31:23.181867 1463591 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:31:23.220906 1463591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:23.571078 1463591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:23.598199 1463591 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379 for IP: 192.168.76.2
	I1018 09:31:23.598222 1463591 certs.go:195] generating shared ca certs ...
	I1018 09:31:23.598238 1463591 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:23.598395 1463591 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:31:23.598433 1463591 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:31:23.598440 1463591 certs.go:257] generating profile certs ...
	I1018 09:31:23.598508 1463591 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/client.key
	I1018 09:31:23.598519 1463591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/client.crt with IP's: []
	I1018 09:31:24.723560 1463591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/client.crt ...
	I1018 09:31:24.724274 1463591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/client.crt: {Name:mk3ec14a8d8a78350319905cda94cbd97af3b0f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:24.724840 1463591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/client.key ...
	I1018 09:31:24.725122 1463591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/client.key: {Name:mk40c7e8cd3be15b27d46738a53649f2f9792cde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:24.725712 1463591 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key.9dbb2352
	I1018 09:31:24.725966 1463591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.crt.9dbb2352 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 09:31:25.819516 1463591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.crt.9dbb2352 ...
	I1018 09:31:25.819596 1463591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.crt.9dbb2352: {Name:mk4560c026bf572891d071dfbd63f676eb63938f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:25.819822 1463591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key.9dbb2352 ...
	I1018 09:31:25.819884 1463591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key.9dbb2352: {Name:mkf76d7f6208b3848caf4d6f6d147d6ec027e270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:25.820018 1463591 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.crt.9dbb2352 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.crt
	I1018 09:31:25.820147 1463591 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key.9dbb2352 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key
	I1018 09:31:25.820254 1463591 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.key
	I1018 09:31:25.820290 1463591 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.crt with IP's: []
	I1018 09:31:26.453145 1463591 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.crt ...
	I1018 09:31:26.453219 1463591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.crt: {Name:mk9321b59c83a4b37362a41efc6621aa80d2141a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:26.453425 1463591 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.key ...
	I1018 09:31:26.453462 1463591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.key: {Name:mk81fcdd0206381488bc8284cca7a12325780773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:26.453686 1463591 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:31:26.453751 1463591 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:31:26.453777 1463591 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:31:26.453831 1463591 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:31:26.453881 1463591 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:31:26.453938 1463591 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:31:26.454006 1463591 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:31:26.454629 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:31:26.474952 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:31:26.491259 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:31:26.507137 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:31:26.523522 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:31:26.539565 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:31:26.556021 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:31:26.572258 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:31:26.588495 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:31:26.604409 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:31:26.620864 1463591 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:31:26.637782 1463591 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:31:26.665695 1463591 ssh_runner.go:195] Run: openssl version
	I1018 09:31:26.672558 1463591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:31:26.689950 1463591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:31:26.694347 1463591 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:31:26.694465 1463591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:31:26.739520 1463591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:31:26.752132 1463591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:31:26.762222 1463591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:31:26.769182 1463591 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:31:26.769259 1463591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:31:26.817205 1463591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:31:26.827158 1463591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:31:26.836613 1463591 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:26.840885 1463591 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:26.840977 1463591 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:26.887196 1463591 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
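	(The "openssl x509 -hash" runs above compute the subject-name hash that OpenSSL uses to locate trusted CAs, which is why minikubeCA.pem ends up linked as /etc/ssl/certs/b5213941.0. A sketch of that step, shelling out to openssl, which is assumed to be on PATH; the PEM path is illustrative.)

// hash_link.go - illustrative sketch of the symlink step logged above: ask
// openssl for the subject-name hash and link the PEM at /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link)
}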
	I1018 09:31:26.909375 1463591 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:31:26.914141 1463591 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:31:26.914200 1463591 kubeadm.go:400] StartCluster: {Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:31:26.914280 1463591 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:31:26.914350 1463591 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:31:26.970298 1463591 cri.go:89] found id: ""
	I1018 09:31:26.970390 1463591 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:31:26.982400 1463591 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:31:26.992407 1463591 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:31:26.992487 1463591 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:31:27.005531 1463591 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:31:27.005554 1463591 kubeadm.go:157] found existing configuration files:
	
	I1018 09:31:27.005615 1463591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:31:27.016145 1463591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:31:27.016221 1463591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:31:27.024999 1463591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:31:27.033852 1463591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:31:27.033918 1463591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:31:27.042111 1463591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:31:27.050613 1463591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:31:27.050720 1463591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:31:27.058545 1463591 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:31:27.067142 1463591 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:31:27.067210 1463591 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:31:27.075086 1463591 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:31:27.126750 1463591 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:31:27.127258 1463591 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:31:27.153856 1463591 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:31:27.153930 1463591 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:31:27.153966 1463591 kubeadm.go:318] OS: Linux
	I1018 09:31:27.154012 1463591 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:31:27.154059 1463591 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:31:27.154106 1463591 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:31:27.154154 1463591 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:31:27.154202 1463591 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:31:27.154249 1463591 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:31:27.154294 1463591 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:31:27.154346 1463591 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:31:27.154392 1463591 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:31:27.255117 1463591 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:31:27.255241 1463591 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:31:27.255352 1463591 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:31:27.264175 1463591 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:31:27.270567 1463591 out.go:252]   - Generating certificates and keys ...
	I1018 09:31:27.270663 1463591 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:31:27.270738 1463591 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:31:26.566450 1460427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt ...
	I1018 09:31:26.566487 1460427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: {Name:mk2a5772f32cebb7396bc0df0d3ee4c4fe755ccb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:26.566663 1460427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.key ...
	I1018 09:31:26.566678 1460427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.key: {Name:mke244d5e66e31d77db71b2197624d8a44613658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:26.566757 1460427 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.key.8ee16fb5
	I1018 09:31:26.566780 1460427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.crt.8ee16fb5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 09:31:27.199954 1460427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.crt.8ee16fb5 ...
	I1018 09:31:27.200029 1460427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.crt.8ee16fb5: {Name:mkfac4b739d6c76be938e0b5a15dbfdf9a20107e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:27.200278 1460427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.key.8ee16fb5 ...
	I1018 09:31:27.200314 1460427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.key.8ee16fb5: {Name:mk5f01a48171bcee2fb6d9a0ba199bcc6cd067ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:27.200448 1460427 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.crt.8ee16fb5 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.crt
	I1018 09:31:27.200580 1460427 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.key.8ee16fb5 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.key
	I1018 09:31:27.200668 1460427 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.key
	I1018 09:31:27.200717 1460427 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.crt with IP's: []
	I1018 09:31:28.055736 1460427 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.crt ...
	I1018 09:31:28.055813 1460427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.crt: {Name:mk0aeca0486897ff3af3e77abe7671b9a0fe34cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:28.056063 1460427 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.key ...
	I1018 09:31:28.056099 1460427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.key: {Name:mkad7d04ed994f697270196f3571ba51d597307a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:28.056386 1460427 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:31:28.056455 1460427 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:31:28.056481 1460427 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:31:28.056536 1460427 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:31:28.056585 1460427 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:31:28.056640 1460427 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:31:28.056713 1460427 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:31:28.057324 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:31:28.084221 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:31:28.104464 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:31:28.125410 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:31:28.149130 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:31:28.203828 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:31:28.224727 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:31:28.244855 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:31:28.264028 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:31:28.290333 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:31:28.309560 1460427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:31:28.328852 1460427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:31:28.343095 1460427 ssh_runner.go:195] Run: openssl version
	I1018 09:31:28.350762 1460427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:31:28.360216 1460427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:31:28.364878 1460427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:31:28.364994 1460427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:31:28.417013 1460427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:31:28.426970 1460427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:31:28.442725 1460427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:31:28.447490 1460427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:31:28.447602 1460427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:31:28.498261 1460427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:31:28.508123 1460427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:31:28.517416 1460427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:28.522170 1460427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:28.522271 1460427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:31:28.567238 1460427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:31:28.576741 1460427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:31:28.581692 1460427 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:31:28.581792 1460427 kubeadm.go:400] StartCluster: {Name:no-preload-886951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:31:28.581900 1460427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:31:28.581987 1460427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:31:28.615151 1460427 cri.go:89] found id: ""
	I1018 09:31:28.615238 1460427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:31:28.625530 1460427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:31:28.634592 1460427 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:31:28.634675 1460427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:31:28.645140 1460427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:31:28.645160 1460427 kubeadm.go:157] found existing configuration files:
	
	I1018 09:31:28.645237 1460427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:31:28.654676 1460427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:31:28.654764 1460427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:31:28.662912 1460427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:31:28.672270 1460427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:31:28.672365 1460427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:31:28.680227 1460427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:31:28.688678 1460427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:31:28.688771 1460427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:31:28.697053 1460427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:31:28.705997 1460427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:31:28.706084 1460427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:31:28.713904 1460427 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:31:28.762222 1460427 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:31:28.762654 1460427 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:31:28.795277 1460427 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:31:28.795402 1460427 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:31:28.795459 1460427 kubeadm.go:318] OS: Linux
	I1018 09:31:28.795547 1460427 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:31:28.795639 1460427 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:31:28.795715 1460427 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:31:28.795796 1460427 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:31:28.795918 1460427 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:31:28.795999 1460427 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:31:28.796075 1460427 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:31:28.796155 1460427 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:31:28.796225 1460427 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:31:28.888730 1460427 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:31:28.888849 1460427 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:31:28.888958 1460427 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:31:28.924431 1460427 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:31:28.931042 1460427 out.go:252]   - Generating certificates and keys ...
	I1018 09:31:28.931143 1460427 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:31:28.931223 1460427 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:31:29.155798 1460427 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:31:29.756218 1460427 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:31:30.042483 1460427 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:31:30.888177 1460427 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:31:28.338744 1463591 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:31:28.835204 1463591 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:31:29.826961 1463591 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:31:30.656268 1463591 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:31:30.904875 1463591 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:31:30.905306 1463591 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-559379 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:31:31.696221 1463591 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:31:31.696356 1463591 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-559379 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:31:32.371809 1463591 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:31:31.136257 1460427 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:31:31.136500 1460427 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-886951] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 09:31:31.533601 1460427 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:31:31.534192 1460427 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-886951] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 09:31:31.746801 1460427 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:31:31.981290 1460427 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:31:32.351828 1460427 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:31:32.352400 1460427 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:31:32.772206 1460427 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:31:33.243900 1460427 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:31:33.771208 1460427 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:31:34.293636 1460427 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:31:34.771969 1460427 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:31:34.774680 1460427 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:31:34.777443 1460427 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
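
Each "[kubeconfig] Writing ..." line corresponds to a kubeconfig file dropped under /etc/kubernetes. A hedged sketch of assembling and writing such a file with client-go's clientcmd API; the server address, file paths, and names below are illustrative assumptions, not values taken from kubeadm's source:

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        cfg := api.NewConfig()
        cfg.Clusters["minikube"] = &api.Cluster{
            Server:               "https://192.168.85.2:8443",
            CertificateAuthority: "/var/lib/minikube/certs/ca.crt",
        }
        cfg.AuthInfos["admin"] = &api.AuthInfo{
            ClientCertificate: "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            ClientKey:         "/var/lib/minikube/certs/apiserver-kubelet-client.key",
        }
        cfg.Contexts["admin@minikube"] = &api.Context{Cluster: "minikube", AuthInfo: "admin"}
        cfg.CurrentContext = "admin@minikube"
        // Serialize to a kubeconfig file, as kubeadm does for admin.conf etc.
        if err := clientcmd.WriteToFile(*cfg, "/tmp/admin.conf"); err != nil {
            panic(err)
        }
    }
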
	I1018 09:31:32.867616 1463591 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:31:33.134161 1463591 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:31:33.134667 1463591 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:31:33.985674 1463591 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:31:34.681006 1463591 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:31:35.203202 1463591 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:31:35.540189 1463591 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:31:36.069353 1463591 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:31:36.069627 1463591 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:31:36.072121 1463591 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:31:34.781176 1460427 out.go:252]   - Booting up control plane ...
	I1018 09:31:34.781282 1460427 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:31:34.781369 1460427 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:31:34.781442 1460427 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:31:34.803578 1460427 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:31:34.803727 1460427 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:31:34.813073 1460427 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:31:34.813183 1460427 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:31:34.813233 1460427 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:31:34.962607 1460427 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:31:34.962738 1460427 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:31:36.075960 1463591 out.go:252]   - Booting up control plane ...
	I1018 09:31:36.076066 1463591 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:31:36.076149 1463591 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:31:36.076220 1463591 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:31:36.091571 1463591 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:31:36.091848 1463591 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:31:36.101298 1463591 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:31:36.104574 1463591 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:31:36.104782 1463591 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:31:36.271697 1463591 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:31:36.271824 1463591 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:31:37.276208 1463591 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001107095s
	I1018 09:31:37.276327 1463591 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:31:37.276420 1463591 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 09:31:37.276518 1463591 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:31:37.276602 1463591 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
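
The [kubelet-check] and [control-plane-check] lines poll the healthz/livez endpoints until each returns success, with a 4m0s ceiling. A minimal sketch of that style of wait loop; skipping TLS verification here is an assumption for brevity only (kubeadm trusts the cluster CA rather than disabling checks):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 3 * time.Second,
            // Sketch-only shortcut: do not verify the serving cert.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        for _, u := range []string{
            "https://127.0.0.1:10257/healthz", // kube-controller-manager
            "https://127.0.0.1:10259/livez",   // kube-scheduler
        } {
            fmt.Println(u, waitHealthy(u, 4*time.Minute))
        }
    }
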
	I1018 09:31:36.464174 1460427 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501478273s
	I1018 09:31:36.469494 1460427 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:31:36.471871 1460427 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 09:31:36.472190 1460427 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:31:36.472904 1460427 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:31:44.181077 1460427 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.707773681s
	I1018 09:31:45.609353 1460427 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.136246295s
	I1018 09:31:46.974440 1460427 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.501878222s
	I1018 09:31:46.999821 1460427 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:31:47.021741 1460427 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:31:47.040531 1460427 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:31:47.041011 1460427 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-886951 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:31:47.054179 1460427 kubeadm.go:318] [bootstrap-token] Using token: xl5rux.hbav811ws9ec54me
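
The "[bootstrap-token] Using token:" lines show kubeadm's bootstrap token shape: a 6-character token ID and a 16-character secret joined by a dot. A small sketch validating that shape against the documented [a-z0-9]{6}.[a-z0-9]{16} pattern, using the two tokens from this log:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Documented kubeadm bootstrap token shape: [a-z0-9]{6}.[a-z0-9]{16}
    var tokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

    func main() {
        for _, t := range []string{"xl5rux.hbav811ws9ec54me", "o5ed9d.aqt2njxfugjsgyjh", "not-a-token"} {
            fmt.Printf("%-28s valid=%v\n", t, tokenRe.MatchString(t))
        }
    }
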
	I1018 09:31:43.908431 1463591 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.631335162s
	I1018 09:31:46.431467 1463591 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.155611885s
	I1018 09:31:47.777496 1463591 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.501453625s
	I1018 09:31:47.805508 1463591 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:31:47.824941 1463591 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:31:47.839202 1463591 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:31:47.839413 1463591 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-559379 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:31:47.858156 1463591 kubeadm.go:318] [bootstrap-token] Using token: o5ed9d.aqt2njxfugjsgyjh
	I1018 09:31:47.057149 1460427 out.go:252]   - Configuring RBAC rules ...
	I1018 09:31:47.057352 1460427 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:31:47.064366 1460427 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:31:47.076233 1460427 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:31:47.082335 1460427 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:31:47.087609 1460427 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:31:47.097130 1460427 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:31:47.384392 1460427 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:31:47.832495 1460427 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:31:48.382005 1460427 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:31:48.383128 1460427 kubeadm.go:318] 
	I1018 09:31:48.383215 1460427 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:31:48.383222 1460427 kubeadm.go:318] 
	I1018 09:31:48.383306 1460427 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:31:48.383312 1460427 kubeadm.go:318] 
	I1018 09:31:48.383339 1460427 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:31:48.383405 1460427 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:31:48.383460 1460427 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:31:48.383464 1460427 kubeadm.go:318] 
	I1018 09:31:48.383521 1460427 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:31:48.383525 1460427 kubeadm.go:318] 
	I1018 09:31:48.383575 1460427 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:31:48.383579 1460427 kubeadm.go:318] 
	I1018 09:31:48.383633 1460427 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:31:48.383711 1460427 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:31:48.383783 1460427 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:31:48.383787 1460427 kubeadm.go:318] 
	I1018 09:31:48.383905 1460427 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:31:48.383986 1460427 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:31:48.383991 1460427 kubeadm.go:318] 
	I1018 09:31:48.384078 1460427 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token xl5rux.hbav811ws9ec54me \
	I1018 09:31:48.384186 1460427 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 \
	I1018 09:31:48.384207 1460427 kubeadm.go:318] 	--control-plane 
	I1018 09:31:48.384212 1460427 kubeadm.go:318] 
	I1018 09:31:48.384300 1460427 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:31:48.384304 1460427 kubeadm.go:318] 
	I1018 09:31:48.384389 1460427 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token xl5rux.hbav811ws9ec54me \
	I1018 09:31:48.384496 1460427 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 
	I1018 09:31:48.388174 1460427 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 09:31:48.388448 1460427 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 09:31:48.388573 1460427 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:31:48.388599 1460427 cni.go:84] Creating CNI manager for ""
	I1018 09:31:48.388611 1460427 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:31:48.391931 1460427 out.go:179] * Configuring CNI (Container Networking Interface) ...
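
The kubeadm join commands printed above carry a --discovery-token-ca-cert-hash of the form sha256:<hex>. That value is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, which a joining node uses to pin the cluster CA before trusting it. A minimal sketch computing it from a PEM-encoded CA cert (the file path is an assumption based on the certificateDir logged earlier):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm documents.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
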
	I1018 09:31:47.861299 1463591 out.go:252]   - Configuring RBAC rules ...
	I1018 09:31:47.861430 1463591 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:31:47.869408 1463591 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:31:47.882561 1463591 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:31:47.891949 1463591 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:31:47.900019 1463591 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:31:47.907959 1463591 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:31:48.184947 1463591 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:31:48.736338 1463591 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:31:49.186199 1463591 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:31:49.187564 1463591 kubeadm.go:318] 
	I1018 09:31:49.187641 1463591 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:31:49.187648 1463591 kubeadm.go:318] 
	I1018 09:31:49.187734 1463591 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:31:49.187739 1463591 kubeadm.go:318] 
	I1018 09:31:49.187766 1463591 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:31:49.187897 1463591 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:31:49.187952 1463591 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:31:49.187956 1463591 kubeadm.go:318] 
	I1018 09:31:49.188012 1463591 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:31:49.188017 1463591 kubeadm.go:318] 
	I1018 09:31:49.188066 1463591 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:31:49.188071 1463591 kubeadm.go:318] 
	I1018 09:31:49.188125 1463591 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:31:49.188203 1463591 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:31:49.188274 1463591 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:31:49.188279 1463591 kubeadm.go:318] 
	I1018 09:31:49.188366 1463591 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:31:49.188446 1463591 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:31:49.188451 1463591 kubeadm.go:318] 
	I1018 09:31:49.188563 1463591 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token o5ed9d.aqt2njxfugjsgyjh \
	I1018 09:31:49.188672 1463591 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 \
	I1018 09:31:49.188693 1463591 kubeadm.go:318] 	--control-plane 
	I1018 09:31:49.188697 1463591 kubeadm.go:318] 
	I1018 09:31:49.188786 1463591 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:31:49.188790 1463591 kubeadm.go:318] 
	I1018 09:31:49.188875 1463591 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token o5ed9d.aqt2njxfugjsgyjh \
	I1018 09:31:49.188991 1463591 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 
	I1018 09:31:49.193906 1463591 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 09:31:49.194143 1463591 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 09:31:49.194253 1463591 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:31:49.194269 1463591 cni.go:84] Creating CNI manager for ""
	I1018 09:31:49.194276 1463591 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:31:49.197461 1463591 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:31:48.394952 1460427 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:31:48.400723 1460427 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:31:48.400745 1460427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:31:48.416106 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:31:48.931082 1460427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:31:48.931238 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:48.931319 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-886951 minikube.k8s.io/updated_at=2025_10_18T09_31_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=no-preload-886951 minikube.k8s.io/primary=true
	I1018 09:31:49.263515 1460427 ops.go:34] apiserver oom_adj: -16
	I1018 09:31:49.263626 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:49.764693 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:50.264553 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:50.763760 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
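
The repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to exist before granting kube-system elevated privileges. A sketch of that wait as a simple exec-and-retry loop; the ~500ms cadence mirrors the log timestamps, and the timeout is an illustrative assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds
    // or the deadline passes.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not found after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute)
        fmt.Println("wait result:", err)
    }
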
	I1018 09:31:49.200652 1463591 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:31:49.205007 1463591 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:31:49.205071 1463591 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:31:49.219591 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:31:49.595588 1463591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:31:49.595733 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:49.595810 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-559379 minikube.k8s.io/updated_at=2025_10_18T09_31_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=embed-certs-559379 minikube.k8s.io/primary=true
	I1018 09:31:49.612375 1463591 ops.go:34] apiserver oom_adj: -16
	I1018 09:31:49.826887 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:50.327288 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:50.827791 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:51.327066 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:51.827761 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:52.327121 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:51.263929 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:51.763751 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:52.263973 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:52.764238 1460427 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:53.011079 1460427 kubeadm.go:1113] duration metric: took 4.079889644s to wait for elevateKubeSystemPrivileges
	I1018 09:31:53.011110 1460427 kubeadm.go:402] duration metric: took 24.429323772s to StartCluster
	I1018 09:31:53.011129 1460427 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:53.011200 1460427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:31:53.011950 1460427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:53.012200 1460427 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:31:53.012316 1460427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:31:53.012591 1460427 config.go:182] Loaded profile config "no-preload-886951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:53.012656 1460427 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:31:53.012726 1460427 addons.go:69] Setting storage-provisioner=true in profile "no-preload-886951"
	I1018 09:31:53.012749 1460427 addons.go:238] Setting addon storage-provisioner=true in "no-preload-886951"
	I1018 09:31:53.012776 1460427 host.go:66] Checking if "no-preload-886951" exists ...
	I1018 09:31:53.013301 1460427 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:31:53.014564 1460427 addons.go:69] Setting default-storageclass=true in profile "no-preload-886951"
	I1018 09:31:53.014589 1460427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-886951"
	I1018 09:31:53.014912 1460427 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:31:53.022492 1460427 out.go:179] * Verifying Kubernetes components...
	I1018 09:31:53.032005 1460427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:53.054920 1460427 addons.go:238] Setting addon default-storageclass=true in "no-preload-886951"
	I1018 09:31:53.054960 1460427 host.go:66] Checking if "no-preload-886951" exists ...
	I1018 09:31:53.055491 1460427 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:31:53.077413 1460427 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:52.827123 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:53.327266 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:53.827381 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:54.326965 1463591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:31:54.676297 1463591 kubeadm.go:1113] duration metric: took 5.080598362s to wait for elevateKubeSystemPrivileges
	I1018 09:31:54.676325 1463591 kubeadm.go:402] duration metric: took 27.762129329s to StartCluster
	I1018 09:31:54.676343 1463591 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:54.676402 1463591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:31:54.677753 1463591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:31:54.677981 1463591 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:31:54.678113 1463591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:31:54.678367 1463591 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:31:54.678317 1463591 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:31:54.678396 1463591 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-559379"
	I1018 09:31:54.678403 1463591 addons.go:69] Setting default-storageclass=true in profile "embed-certs-559379"
	I1018 09:31:54.678411 1463591 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-559379"
	I1018 09:31:54.678415 1463591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-559379"
	I1018 09:31:54.678435 1463591 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:31:54.678724 1463591 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:31:54.678859 1463591 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:31:54.694778 1463591 out.go:179] * Verifying Kubernetes components...
	I1018 09:31:54.704070 1463591 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:31:54.723992 1463591 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:31:53.086750 1460427 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:53.086771 1460427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:31:53.086836 1460427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:31:53.091182 1460427 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:53.091208 1460427 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:31:53.091266 1460427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:31:53.135014 1460427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34881 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:31:53.143462 1460427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34881 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:31:53.528497 1460427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:31:53.528608 1460427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:53.582062 1460427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:53.814927 1460427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:55.023037 1460427 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.494396569s)
	I1018 09:31:55.023277 1460427 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.494753252s)
	I1018 09:31:55.023307 1460427 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 09:31:55.026757 1460427 node_ready.go:35] waiting up to 6m0s for node "no-preload-886951" to be "Ready" ...
	I1018 09:31:55.529098 1460427 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-886951" context rescaled to 1 replicas
	I1018 09:31:55.703797 1460427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.121650849s)
	I1018 09:31:55.703892 1460427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.888942219s)
	I1018 09:31:55.749183 1460427 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:31:55.752258 1460427 addons.go:514] duration metric: took 2.739587908s for enable addons: enabled=[storage-provisioner default-storageclass]
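
The "coredns ... rescaled to 1 replicas" line a few entries up reflects minikube trimming CoreDNS from the default two replicas to one on a single-node cluster. A hedged client-go sketch of one way to perform such a rescale via the scale subresource; minikube's own implementation may differ:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // Fetch the scale subresource, set one replica, write it back.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }
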
	I1018 09:31:54.726901 1463591 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:54.726925 1463591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:31:54.726994 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:54.728564 1463591 addons.go:238] Setting addon default-storageclass=true in "embed-certs-559379"
	I1018 09:31:54.728606 1463591 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:31:54.729044 1463591 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:31:54.773696 1463591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34886 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:31:54.787273 1463591 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:54.787295 1463591 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:31:54.787359 1463591 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:31:54.821649 1463591 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34886 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:31:55.579704 1463591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:31:55.579898 1463591 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:31:55.607297 1463591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:31:55.672671 1463591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:31:56.650225 1463591 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.070272156s)
	I1018 09:31:56.650521 1463591 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.070745661s)
	I1018 09:31:56.650562 1463591 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 09:31:56.652718 1463591 node_ready.go:35] waiting up to 6m0s for node "embed-certs-559379" to be "Ready" ...
	I1018 09:31:56.993908 1463591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.386534324s)
	I1018 09:31:56.993952 1463591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.321211934s)
	I1018 09:31:57.021162 1463591 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:31:57.024153 1463591 addons.go:514] duration metric: took 2.34582676s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:31:57.158601 1463591 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-559379" context rescaled to 1 replicas
	W1018 09:31:57.029910 1460427 node_ready.go:57] node "no-preload-886951" has "Ready":"False" status (will retry)
	W1018 09:31:59.032847 1460427 node_ready.go:57] node "no-preload-886951" has "Ready":"False" status (will retry)
	W1018 09:31:58.656598 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	W1018 09:32:01.157891 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	W1018 09:32:01.529750 1460427 node_ready.go:57] node "no-preload-886951" has "Ready":"False" status (will retry)
	W1018 09:32:03.529999 1460427 node_ready.go:57] node "no-preload-886951" has "Ready":"False" status (will retry)
	W1018 09:32:05.533962 1460427 node_ready.go:57] node "no-preload-886951" has "Ready":"False" status (will retry)
	W1018 09:32:03.656854 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	W1018 09:32:06.156641 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	W1018 09:32:08.030290 1460427 node_ready.go:57] node "no-preload-886951" has "Ready":"False" status (will retry)
	I1018 09:32:09.030690 1460427 node_ready.go:49] node "no-preload-886951" is "Ready"
	I1018 09:32:09.030724 1460427 node_ready.go:38] duration metric: took 14.0038192s for node "no-preload-886951" to be "Ready" ...
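
The node_ready wait above repeatedly inspects the node until its Ready condition flips to True, which is when the `node ... is "Ready"` line appears. A sketch of the underlying condition check with client-go types (the polling loop and clientset construction are omitted; the fake node in main is purely for demonstration):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isNodeReady reports whether the NodeReady condition is True.
    func isNodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
            {Type: corev1.NodeReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println("ready:", isNodeReady(n))
    }
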
	I1018 09:32:09.030737 1460427 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:32:09.030820 1460427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:32:09.065011 1460427 api_server.go:72] duration metric: took 16.052772774s to wait for apiserver process to appear ...
	I1018 09:32:09.065035 1460427 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:32:09.065055 1460427 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:32:09.080560 1460427 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1018 09:32:09.085695 1460427 api_server.go:141] control plane version: v1.34.1
	I1018 09:32:09.085728 1460427 api_server.go:131] duration metric: took 20.68607ms to wait for apiserver health ...
	I1018 09:32:09.085737 1460427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:32:09.091446 1460427 system_pods.go:59] 8 kube-system pods found
	I1018 09:32:09.091492 1460427 system_pods.go:61] "coredns-66bc5c9577-l2rmq" [d42892c4-28ad-40e9-bef7-edbaf3096efe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:09.091498 1460427 system_pods.go:61] "etcd-no-preload-886951" [d98ac5f9-85a9-42b9-8400-7faa88209769] Running
	I1018 09:32:09.091505 1460427 system_pods.go:61] "kindnet-l4xmh" [bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de] Running
	I1018 09:32:09.091509 1460427 system_pods.go:61] "kube-apiserver-no-preload-886951" [6d09816a-4450-453e-bba5-a83623cf0117] Running
	I1018 09:32:09.091514 1460427 system_pods.go:61] "kube-controller-manager-no-preload-886951" [358971ea-822e-4710-aefb-b9eca4fb2e54] Running
	I1018 09:32:09.091518 1460427 system_pods.go:61] "kube-proxy-4gbs9" [fcd20dac-c8b3-447a-9e13-0793b197fe69] Running
	I1018 09:32:09.091523 1460427 system_pods.go:61] "kube-scheduler-no-preload-886951" [2b660ad7-873b-403d-9622-653cf24afd79] Running
	I1018 09:32:09.091529 1460427 system_pods.go:61] "storage-provisioner" [16c0c41e-8ef0-457f-af9d-4f27c563f4a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:09.091536 1460427 system_pods.go:74] duration metric: took 5.79252ms to wait for pod list to return data ...
	I1018 09:32:09.091545 1460427 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:32:09.094302 1460427 default_sa.go:45] found service account: "default"
	I1018 09:32:09.094340 1460427 default_sa.go:55] duration metric: took 2.787426ms for default service account to be created ...
	I1018 09:32:09.094349 1460427 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:32:09.097857 1460427 system_pods.go:86] 8 kube-system pods found
	I1018 09:32:09.097895 1460427 system_pods.go:89] "coredns-66bc5c9577-l2rmq" [d42892c4-28ad-40e9-bef7-edbaf3096efe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:09.097903 1460427 system_pods.go:89] "etcd-no-preload-886951" [d98ac5f9-85a9-42b9-8400-7faa88209769] Running
	I1018 09:32:09.097909 1460427 system_pods.go:89] "kindnet-l4xmh" [bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de] Running
	I1018 09:32:09.097914 1460427 system_pods.go:89] "kube-apiserver-no-preload-886951" [6d09816a-4450-453e-bba5-a83623cf0117] Running
	I1018 09:32:09.097919 1460427 system_pods.go:89] "kube-controller-manager-no-preload-886951" [358971ea-822e-4710-aefb-b9eca4fb2e54] Running
	I1018 09:32:09.097924 1460427 system_pods.go:89] "kube-proxy-4gbs9" [fcd20dac-c8b3-447a-9e13-0793b197fe69] Running
	I1018 09:32:09.097928 1460427 system_pods.go:89] "kube-scheduler-no-preload-886951" [2b660ad7-873b-403d-9622-653cf24afd79] Running
	I1018 09:32:09.097935 1460427 system_pods.go:89] "storage-provisioner" [16c0c41e-8ef0-457f-af9d-4f27c563f4a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:09.097956 1460427 retry.go:31] will retry after 247.440083ms: missing components: kube-dns
	I1018 09:32:09.350281 1460427 system_pods.go:86] 8 kube-system pods found
	I1018 09:32:09.350366 1460427 system_pods.go:89] "coredns-66bc5c9577-l2rmq" [d42892c4-28ad-40e9-bef7-edbaf3096efe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:09.350389 1460427 system_pods.go:89] "etcd-no-preload-886951" [d98ac5f9-85a9-42b9-8400-7faa88209769] Running
	I1018 09:32:09.350409 1460427 system_pods.go:89] "kindnet-l4xmh" [bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de] Running
	I1018 09:32:09.350441 1460427 system_pods.go:89] "kube-apiserver-no-preload-886951" [6d09816a-4450-453e-bba5-a83623cf0117] Running
	I1018 09:32:09.350464 1460427 system_pods.go:89] "kube-controller-manager-no-preload-886951" [358971ea-822e-4710-aefb-b9eca4fb2e54] Running
	I1018 09:32:09.350486 1460427 system_pods.go:89] "kube-proxy-4gbs9" [fcd20dac-c8b3-447a-9e13-0793b197fe69] Running
	I1018 09:32:09.350547 1460427 system_pods.go:89] "kube-scheduler-no-preload-886951" [2b660ad7-873b-403d-9622-653cf24afd79] Running
	I1018 09:32:09.350576 1460427 system_pods.go:89] "storage-provisioner" [16c0c41e-8ef0-457f-af9d-4f27c563f4a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:09.350608 1460427 retry.go:31] will retry after 246.818058ms: missing components: kube-dns
	I1018 09:32:09.601936 1460427 system_pods.go:86] 8 kube-system pods found
	I1018 09:32:09.601971 1460427 system_pods.go:89] "coredns-66bc5c9577-l2rmq" [d42892c4-28ad-40e9-bef7-edbaf3096efe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:09.601978 1460427 system_pods.go:89] "etcd-no-preload-886951" [d98ac5f9-85a9-42b9-8400-7faa88209769] Running
	I1018 09:32:09.601985 1460427 system_pods.go:89] "kindnet-l4xmh" [bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de] Running
	I1018 09:32:09.601989 1460427 system_pods.go:89] "kube-apiserver-no-preload-886951" [6d09816a-4450-453e-bba5-a83623cf0117] Running
	I1018 09:32:09.601997 1460427 system_pods.go:89] "kube-controller-manager-no-preload-886951" [358971ea-822e-4710-aefb-b9eca4fb2e54] Running
	I1018 09:32:09.602001 1460427 system_pods.go:89] "kube-proxy-4gbs9" [fcd20dac-c8b3-447a-9e13-0793b197fe69] Running
	I1018 09:32:09.602005 1460427 system_pods.go:89] "kube-scheduler-no-preload-886951" [2b660ad7-873b-403d-9622-653cf24afd79] Running
	I1018 09:32:09.602010 1460427 system_pods.go:89] "storage-provisioner" [16c0c41e-8ef0-457f-af9d-4f27c563f4a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:09.602025 1460427 retry.go:31] will retry after 440.019692ms: missing components: kube-dns
	I1018 09:32:10.046497 1460427 system_pods.go:86] 8 kube-system pods found
	I1018 09:32:10.046532 1460427 system_pods.go:89] "coredns-66bc5c9577-l2rmq" [d42892c4-28ad-40e9-bef7-edbaf3096efe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:10.046541 1460427 system_pods.go:89] "etcd-no-preload-886951" [d98ac5f9-85a9-42b9-8400-7faa88209769] Running
	I1018 09:32:10.046548 1460427 system_pods.go:89] "kindnet-l4xmh" [bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de] Running
	I1018 09:32:10.046553 1460427 system_pods.go:89] "kube-apiserver-no-preload-886951" [6d09816a-4450-453e-bba5-a83623cf0117] Running
	I1018 09:32:10.046586 1460427 system_pods.go:89] "kube-controller-manager-no-preload-886951" [358971ea-822e-4710-aefb-b9eca4fb2e54] Running
	I1018 09:32:10.046591 1460427 system_pods.go:89] "kube-proxy-4gbs9" [fcd20dac-c8b3-447a-9e13-0793b197fe69] Running
	I1018 09:32:10.046595 1460427 system_pods.go:89] "kube-scheduler-no-preload-886951" [2b660ad7-873b-403d-9622-653cf24afd79] Running
	I1018 09:32:10.046609 1460427 system_pods.go:89] "storage-provisioner" [16c0c41e-8ef0-457f-af9d-4f27c563f4a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:10.046625 1460427 retry.go:31] will retry after 487.747614ms: missing components: kube-dns
	I1018 09:32:10.539060 1460427 system_pods.go:86] 8 kube-system pods found
	I1018 09:32:10.539092 1460427 system_pods.go:89] "coredns-66bc5c9577-l2rmq" [d42892c4-28ad-40e9-bef7-edbaf3096efe] Running
	I1018 09:32:10.539099 1460427 system_pods.go:89] "etcd-no-preload-886951" [d98ac5f9-85a9-42b9-8400-7faa88209769] Running
	I1018 09:32:10.539103 1460427 system_pods.go:89] "kindnet-l4xmh" [bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de] Running
	I1018 09:32:10.539107 1460427 system_pods.go:89] "kube-apiserver-no-preload-886951" [6d09816a-4450-453e-bba5-a83623cf0117] Running
	I1018 09:32:10.539113 1460427 system_pods.go:89] "kube-controller-manager-no-preload-886951" [358971ea-822e-4710-aefb-b9eca4fb2e54] Running
	I1018 09:32:10.539119 1460427 system_pods.go:89] "kube-proxy-4gbs9" [fcd20dac-c8b3-447a-9e13-0793b197fe69] Running
	I1018 09:32:10.539123 1460427 system_pods.go:89] "kube-scheduler-no-preload-886951" [2b660ad7-873b-403d-9622-653cf24afd79] Running
	I1018 09:32:10.539127 1460427 system_pods.go:89] "storage-provisioner" [16c0c41e-8ef0-457f-af9d-4f27c563f4a8] Running
	I1018 09:32:10.539134 1460427 system_pods.go:126] duration metric: took 1.444780452s to wait for k8s-apps to be running ...
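
The system_pods loop above lists kube-system pods and retries while any required component (here kube-dns/CoreDNS) is still Pending. A sketch of the core predicate, assuming a client-go clientset built from the minikube kubeconfig:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // pendingPods returns the names of kube-system pods not yet Running.
    func pendingPods(cs *kubernetes.Clientset) ([]string, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var pending []string
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                pending = append(pending, p.Name)
            }
        }
        return pending, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pending, err := pendingPods(cs)
        fmt.Println("pending:", pending, "err:", err)
    }
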
	I1018 09:32:10.539146 1460427 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:32:10.539212 1460427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:32:10.554879 1460427 system_svc.go:56] duration metric: took 15.722221ms WaitForService to wait for kubelet
	I1018 09:32:10.554905 1460427 kubeadm.go:586] duration metric: took 17.54267357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:32:10.554924 1460427 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:32:10.557808 1460427 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:32:10.557839 1460427 node_conditions.go:123] node cpu capacity is 2
	I1018 09:32:10.557852 1460427 node_conditions.go:105] duration metric: took 2.922323ms to run NodePressure ...
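
The NodePressure lines read the node's reported capacity ("ephemeral capacity is 203034800Ki", "cpu capacity is 2") straight from the Node status. A sketch of extracting those quantities with the resource API; the node object is constructed inline here instead of being fetched:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func printCapacity(node *corev1.Node) {
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), eph.String())
    }

    func main() {
        n := &corev1.Node{Status: corev1.NodeStatus{Capacity: corev1.ResourceList{
            corev1.ResourceCPU:              resource.MustParse("2"),
            corev1.ResourceEphemeralStorage: resource.MustParse("203034800Ki"),
        }}}
        printCapacity(n)
    }
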
	I1018 09:32:10.557865 1460427 start.go:241] waiting for startup goroutines ...
	I1018 09:32:10.557898 1460427 start.go:246] waiting for cluster config update ...
	I1018 09:32:10.557910 1460427 start.go:255] writing updated cluster config ...
	I1018 09:32:10.558216 1460427 ssh_runner.go:195] Run: rm -f paused
	I1018 09:32:10.562525 1460427 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:32:10.566606 1460427 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l2rmq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:10.571559 1460427 pod_ready.go:94] pod "coredns-66bc5c9577-l2rmq" is "Ready"
	I1018 09:32:10.571630 1460427 pod_ready.go:86] duration metric: took 4.995151ms for pod "coredns-66bc5c9577-l2rmq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:10.574000 1460427 pod_ready.go:83] waiting for pod "etcd-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:10.578621 1460427 pod_ready.go:94] pod "etcd-no-preload-886951" is "Ready"
	I1018 09:32:10.578649 1460427 pod_ready.go:86] duration metric: took 4.623116ms for pod "etcd-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:10.580963 1460427 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:10.585571 1460427 pod_ready.go:94] pod "kube-apiserver-no-preload-886951" is "Ready"
	I1018 09:32:10.585606 1460427 pod_ready.go:86] duration metric: took 4.618932ms for pod "kube-apiserver-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:10.587883 1460427 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:10.967022 1460427 pod_ready.go:94] pod "kube-controller-manager-no-preload-886951" is "Ready"
	I1018 09:32:10.967049 1460427 pod_ready.go:86] duration metric: took 379.142705ms for pod "kube-controller-manager-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:11.166826 1460427 pod_ready.go:83] waiting for pod "kube-proxy-4gbs9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:11.566925 1460427 pod_ready.go:94] pod "kube-proxy-4gbs9" is "Ready"
	I1018 09:32:11.566952 1460427 pod_ready.go:86] duration metric: took 400.101794ms for pod "kube-proxy-4gbs9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:11.767183 1460427 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:12.167090 1460427 pod_ready.go:94] pod "kube-scheduler-no-preload-886951" is "Ready"
	I1018 09:32:12.167119 1460427 pod_ready.go:86] duration metric: took 399.910734ms for pod "kube-scheduler-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:12.167133 1460427 pod_ready.go:40] duration metric: took 1.604577192s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
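
The pod_ready phase waits for every pod carrying one of the listed control-plane labels to report the PodReady condition as True. A sketch of that check, listing pods per label selector from the log line above, assuming a clientset built from the minikube kubeconfig:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The label selectors named in the pod_ready log lines.
        selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                panic(err)
            }
            for _, p := range pods.Items {
                fmt.Printf("%-35s %-45s ready=%v\n", sel, p.Name, isPodReady(&p))
            }
        }
    }
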
	I1018 09:32:12.223108 1460427 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:32:12.226353 1460427 out.go:179] * Done! kubectl is now configured to use "no-preload-886951" cluster and "default" namespace by default
	W1018 09:32:08.656228 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	W1018 09:32:10.656954 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	W1018 09:32:13.155302 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	W1018 09:32:15.156241 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	W1018 09:32:17.655418 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 18 09:32:09 no-preload-886951 crio[841]: time="2025-10-18T09:32:09.431253769Z" level=info msg="Created container 3fb26713f1064567a368cd26776d784a7772f75274aa4b5bc8711fcf4b98e000: kube-system/coredns-66bc5c9577-l2rmq/coredns" id=06cffa27-c49f-44da-9467-2b833f10e07b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:32:09 no-preload-886951 crio[841]: time="2025-10-18T09:32:09.433319246Z" level=info msg="Starting container: 3fb26713f1064567a368cd26776d784a7772f75274aa4b5bc8711fcf4b98e000" id=4d6df4a4-aeb1-4947-b875-10893463bf35 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:32:09 no-preload-886951 crio[841]: time="2025-10-18T09:32:09.435176966Z" level=info msg="Started container" PID=2488 containerID=3fb26713f1064567a368cd26776d784a7772f75274aa4b5bc8711fcf4b98e000 description=kube-system/coredns-66bc5c9577-l2rmq/coredns id=4d6df4a4-aeb1-4947-b875-10893463bf35 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ba81e76f8dbd813f462e91d62745d1b2d719fb47502e98630aeef4434051078
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.738488392Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6a4ba521-8899-415d-9c2b-ab586f16fbd2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.738575594Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.743988645Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0f9779d07f13cd8d5842a2da37b1240de8316ba5f83321f516a23b76b49786b8 UID:fb073a56-3a60-4f54-b138-9cee2302d24a NetNS:/var/run/netns/9e571e27-a3dc-44b0-b41d-eca9da07f523 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40015fc9c0}] Aliases:map[]}"
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.744027914Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.755511787Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0f9779d07f13cd8d5842a2da37b1240de8316ba5f83321f516a23b76b49786b8 UID:fb073a56-3a60-4f54-b138-9cee2302d24a NetNS:/var/run/netns/9e571e27-a3dc-44b0-b41d-eca9da07f523 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40015fc9c0}] Aliases:map[]}"
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.75567376Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.758690701Z" level=info msg="Ran pod sandbox 0f9779d07f13cd8d5842a2da37b1240de8316ba5f83321f516a23b76b49786b8 with infra container: default/busybox/POD" id=6a4ba521-8899-415d-9c2b-ab586f16fbd2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.76100989Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=881f5908-4420-4274-864a-5c98a6272954 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.761165545Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=881f5908-4420-4274-864a-5c98a6272954 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.761230298Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=881f5908-4420-4274-864a-5c98a6272954 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.763692564Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=12939480-5ee6-4b3a-a82c-f79627ed862d name=/runtime.v1.ImageService/PullImage
	Oct 18 09:32:12 no-preload-886951 crio[841]: time="2025-10-18T09:32:12.765941035Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.140864052Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=12939480-5ee6-4b3a-a82c-f79627ed862d name=/runtime.v1.ImageService/PullImage
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.142987414Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=31b1e3b3-e961-4f73-a32f-f6d7c9cfd8b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.146238081Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8e809a70-c2a9-439c-9152-2f9ef739a16d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.151908503Z" level=info msg="Creating container: default/busybox/busybox" id=39877477-ed1d-44e6-9087-c22d86d859be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.152650924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.158835999Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.159331419Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.17500057Z" level=info msg="Created container 81caeb13502823d2f4659cd5d769b7a8056eb76d62a5d9fa12bfca7c80e9547a: default/busybox/busybox" id=39877477-ed1d-44e6-9087-c22d86d859be name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.176111431Z" level=info msg="Starting container: 81caeb13502823d2f4659cd5d769b7a8056eb76d62a5d9fa12bfca7c80e9547a" id=9260330d-5cfc-4d82-9040-f06e4dd423ca name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:32:15 no-preload-886951 crio[841]: time="2025-10-18T09:32:15.178750759Z" level=info msg="Started container" PID=2539 containerID=81caeb13502823d2f4659cd5d769b7a8056eb76d62a5d9fa12bfca7c80e9547a description=default/busybox/busybox id=9260330d-5cfc-4d82-9040-f06e4dd423ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f9779d07f13cd8d5842a2da37b1240de8316ba5f83321f516a23b76b49786b8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	81caeb1350282       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   6 seconds ago       Running             busybox                   0                   0f9779d07f13c       busybox                                     default
	3fb26713f1064       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   6ba81e76f8dbd       coredns-66bc5c9577-l2rmq                    kube-system
	bc6b0dc2b8294       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   fdb987e80af8c       storage-provisioner                         kube-system
	873bd6491103c       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   c4b6fa1ceb7a0       kindnet-l4xmh                               kube-system
	f4b399b4834ab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   41702904bdff9       kube-proxy-4gbs9                            kube-system
	0c1466055ee56       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   7801396e45e45       kube-apiserver-no-preload-886951            kube-system
	2951623eaf4e8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   31ce0e943fd3f       kube-scheduler-no-preload-886951            kube-system
	c7e163158b413       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   addecc58c7605       kube-controller-manager-no-preload-886951   kube-system
	21d07d52cc86f       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      45 seconds ago      Running             etcd                      0                   2eab39962756f       etcd-no-preload-886951                      kube-system
	
	
	==> coredns [3fb26713f1064567a368cd26776d784a7772f75274aa4b5bc8711fcf4b98e000] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48833 - 20395 "HINFO IN 1268197820185719924.7702317285308419377. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013104587s
	
	
	==> describe nodes <==
	Name:               no-preload-886951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-886951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=no-preload-886951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_31_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:31:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-886951
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:32:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:32:18 +0000   Sat, 18 Oct 2025 09:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:32:18 +0000   Sat, 18 Oct 2025 09:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:32:18 +0000   Sat, 18 Oct 2025 09:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:32:18 +0000   Sat, 18 Oct 2025 09:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-886951
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                637092e3-28b4-4cc7-8dae-a07e30854491
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-l2rmq                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-886951                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-l4xmh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-886951             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-886951    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-4gbs9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-886951             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-886951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-886951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node no-preload-886951 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-886951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-886951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-886951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-886951 event: Registered Node no-preload-886951 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-886951 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 09:08] overlayfs: idmapped layers are currently not supported
	[Oct18 09:10] overlayfs: idmapped layers are currently not supported
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [21d07d52cc86f7d1c0e29d425ee865af7d3035e040810fdb6104746d60bf8cef] <==
	{"level":"warn","ts":"2025-10-18T09:31:40.767314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:40.815877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:40.857442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:40.914584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:40.966676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.008139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.077678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.126933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.199435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.250648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.293591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.344647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.434281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.476661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.541548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.559745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.648434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.649150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.696310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.799646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.826608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.871307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:41.948113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.178102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50904","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:31:53.120209Z","caller":"traceutil/trace.go:172","msg":"trace[1906804077] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"107.112291ms","start":"2025-10-18T09:31:53.013078Z","end":"2025-10-18T09:31:53.120190Z","steps":["trace[1906804077] 'process raft request'  (duration: 96.528898ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:32:22 up 11:14,  0 user,  load average: 4.58, 3.52, 2.71
	Linux no-preload-886951 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [873bd6491103ce78f4020c6b0ab5e1299f9576d09f05bb5e6467d9c0e45fa9e0] <==
	I1018 09:31:58.618498       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:31:58.618911       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 09:31:58.619112       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:31:58.619159       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:31:58.619196       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:31:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:31:58.813533       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:31:58.813639       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:31:58.908549       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:31:58.908862       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:31:59.016244       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:31:59.016266       1 metrics.go:72] Registering metrics
	I1018 09:31:59.016335       1 controller.go:711] "Syncing nftables rules"
	I1018 09:32:08.821184       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:32:08.821238       1 main.go:301] handling current node
	I1018 09:32:18.813647       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:32:18.813749       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0c1466055ee56f7eb45d5d7adfa3a519c61bd6886a59357ba5c9c09a8e8251b3] <==
	I1018 09:31:44.649223       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:31:44.649228       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:31:44.667682       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:31:44.722269       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:31:44.726800       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:31:44.784413       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:31:44.798035       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:31:44.798139       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:31:45.117390       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:31:45.200734       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:31:45.200772       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:31:46.545579       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:31:46.604656       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:31:46.666438       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	I1018 09:31:46.681626       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1018 09:31:46.683104       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 09:31:46.684278       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:31:46.691275       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:31:47.803820       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:31:47.831258       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:31:47.851316       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:31:52.490620       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:31:52.497697       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:31:52.582098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:31:52.787926       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c7e163158b41352f2a76cc42bd2732cee9d2b7593b3ced4ff8f4a02e47c821a9] <==
	I1018 09:31:51.706789       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-886951" podCIDRs=["10.244.0.0/24"]
	I1018 09:31:51.716512       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:31:51.721048       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:31:51.722271       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:31:51.722943       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:31:51.723062       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:31:51.723161       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-886951"
	I1018 09:31:51.723224       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:31:51.724043       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:31:51.725492       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:31:51.726085       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:31:51.728174       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:31:51.728428       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:31:51.728848       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:31:51.728887       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:31:51.728483       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:31:51.728503       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:31:51.735294       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:31:51.735817       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:31:51.747777       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:31:51.775310       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:31:51.777603       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:31:51.784845       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:31:51.791092       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:32:11.727218       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f4b399b4834ab5d32ffa4bc569e119798c0d4de94ade3a9b2d73344634a2e995] <==
	I1018 09:31:53.816640       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:31:53.939146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:31:54.044093       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:31:54.044134       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 09:31:54.044218       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:31:54.157310       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:31:54.157365       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:31:54.162382       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:31:54.162759       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:31:54.162780       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:31:54.164923       1 config.go:200] "Starting service config controller"
	I1018 09:31:54.164934       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:31:54.164950       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:31:54.164953       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:31:54.164969       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:31:54.164974       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:31:54.167009       1 config.go:309] "Starting node config controller"
	I1018 09:31:54.167023       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:31:54.265937       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:31:54.265970       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:31:54.266020       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:31:54.267444       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [2951623eaf4e81199e4586a65bf8fc7deb47764b47c0c5d5691a8189d4737a99] <==
	I1018 09:31:45.560964       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:31:45.561001       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:31:45.561853       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:31:45.561919       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 09:31:45.588653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 09:31:45.608516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:31:45.608933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:31:45.608998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:31:45.609046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:31:45.609130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:31:45.609169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:31:45.609205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:31:45.609238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:31:45.609470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:31:45.609536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:31:45.609573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:31:45.609602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:31:45.609639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:31:45.609677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:31:45.609713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:31:45.609744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:31:45.609775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:31:45.609838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:31:46.497903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 09:31:49.162856       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:31:51 no-preload-886951 kubelet[2003]: I1018 09:31:51.755632    2003 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:31:51 no-preload-886951 kubelet[2003]: I1018 09:31:51.756181    2003 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:31:52 no-preload-886951 kubelet[2003]: I1018 09:31:52.903769    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de-cni-cfg\") pod \"kindnet-l4xmh\" (UID: \"bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de\") " pod="kube-system/kindnet-l4xmh"
	Oct 18 09:31:52 no-preload-886951 kubelet[2003]: I1018 09:31:52.903818    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fcd20dac-c8b3-447a-9e13-0793b197fe69-xtables-lock\") pod \"kube-proxy-4gbs9\" (UID: \"fcd20dac-c8b3-447a-9e13-0793b197fe69\") " pod="kube-system/kube-proxy-4gbs9"
	Oct 18 09:31:52 no-preload-886951 kubelet[2003]: I1018 09:31:52.903856    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fcd20dac-c8b3-447a-9e13-0793b197fe69-lib-modules\") pod \"kube-proxy-4gbs9\" (UID: \"fcd20dac-c8b3-447a-9e13-0793b197fe69\") " pod="kube-system/kube-proxy-4gbs9"
	Oct 18 09:31:52 no-preload-886951 kubelet[2003]: I1018 09:31:52.903880    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de-xtables-lock\") pod \"kindnet-l4xmh\" (UID: \"bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de\") " pod="kube-system/kindnet-l4xmh"
	Oct 18 09:31:52 no-preload-886951 kubelet[2003]: I1018 09:31:52.903898    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de-lib-modules\") pod \"kindnet-l4xmh\" (UID: \"bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de\") " pod="kube-system/kindnet-l4xmh"
	Oct 18 09:31:52 no-preload-886951 kubelet[2003]: I1018 09:31:52.903915    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pswwx\" (UniqueName: \"kubernetes.io/projected/bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de-kube-api-access-pswwx\") pod \"kindnet-l4xmh\" (UID: \"bd6c0a30-1f76-4b98-99f8-a0f3bdf4c4de\") " pod="kube-system/kindnet-l4xmh"
	Oct 18 09:31:52 no-preload-886951 kubelet[2003]: I1018 09:31:52.903931    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fcd20dac-c8b3-447a-9e13-0793b197fe69-kube-proxy\") pod \"kube-proxy-4gbs9\" (UID: \"fcd20dac-c8b3-447a-9e13-0793b197fe69\") " pod="kube-system/kube-proxy-4gbs9"
	Oct 18 09:31:52 no-preload-886951 kubelet[2003]: I1018 09:31:52.903947    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6ssm\" (UniqueName: \"kubernetes.io/projected/fcd20dac-c8b3-447a-9e13-0793b197fe69-kube-api-access-v6ssm\") pod \"kube-proxy-4gbs9\" (UID: \"fcd20dac-c8b3-447a-9e13-0793b197fe69\") " pod="kube-system/kube-proxy-4gbs9"
	Oct 18 09:31:53 no-preload-886951 kubelet[2003]: I1018 09:31:53.173048    2003 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 09:31:53 no-preload-886951 kubelet[2003]: W1018 09:31:53.468073    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/crio-41702904bdff951928a19a11f1969ef7d11a1abb641ff3421616457637012fc2 WatchSource:0}: Error finding container 41702904bdff951928a19a11f1969ef7d11a1abb641ff3421616457637012fc2: Status 404 returned error can't find the container with id 41702904bdff951928a19a11f1969ef7d11a1abb641ff3421616457637012fc2
	Oct 18 09:31:53 no-preload-886951 kubelet[2003]: W1018 09:31:53.526993    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/crio-c4b6fa1ceb7a047377e45e30c7ca532d0e85d5fe8b0b7fa093566e1ec99d328a WatchSource:0}: Error finding container c4b6fa1ceb7a047377e45e30c7ca532d0e85d5fe8b0b7fa093566e1ec99d328a: Status 404 returned error can't find the container with id c4b6fa1ceb7a047377e45e30c7ca532d0e85d5fe8b0b7fa093566e1ec99d328a
	Oct 18 09:31:54 no-preload-886951 kubelet[2003]: I1018 09:31:54.127054    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4gbs9" podStartSLOduration=2.127036235 podStartE2EDuration="2.127036235s" podCreationTimestamp="2025-10-18 09:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:31:54.126995063 +0000 UTC m=+6.414918693" watchObservedRunningTime="2025-10-18 09:31:54.127036235 +0000 UTC m=+6.414959841"
	Oct 18 09:32:08 no-preload-886951 kubelet[2003]: I1018 09:32:08.897184    2003 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:32:09 no-preload-886951 kubelet[2003]: I1018 09:32:09.008338    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-l4xmh" podStartSLOduration=12.122813722 podStartE2EDuration="17.008316582s" podCreationTimestamp="2025-10-18 09:31:52 +0000 UTC" firstStartedPulling="2025-10-18 09:31:53.55827868 +0000 UTC m=+5.846202278" lastFinishedPulling="2025-10-18 09:31:58.44378154 +0000 UTC m=+10.731705138" observedRunningTime="2025-10-18 09:31:59.158606748 +0000 UTC m=+11.446530346" watchObservedRunningTime="2025-10-18 09:32:09.008316582 +0000 UTC m=+21.296240188"
	Oct 18 09:32:09 no-preload-886951 kubelet[2003]: I1018 09:32:09.114239    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zw6z\" (UniqueName: \"kubernetes.io/projected/16c0c41e-8ef0-457f-af9d-4f27c563f4a8-kube-api-access-6zw6z\") pod \"storage-provisioner\" (UID: \"16c0c41e-8ef0-457f-af9d-4f27c563f4a8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:32:09 no-preload-886951 kubelet[2003]: I1018 09:32:09.114302    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d42892c4-28ad-40e9-bef7-edbaf3096efe-config-volume\") pod \"coredns-66bc5c9577-l2rmq\" (UID: \"d42892c4-28ad-40e9-bef7-edbaf3096efe\") " pod="kube-system/coredns-66bc5c9577-l2rmq"
	Oct 18 09:32:09 no-preload-886951 kubelet[2003]: I1018 09:32:09.114331    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz42s\" (UniqueName: \"kubernetes.io/projected/d42892c4-28ad-40e9-bef7-edbaf3096efe-kube-api-access-nz42s\") pod \"coredns-66bc5c9577-l2rmq\" (UID: \"d42892c4-28ad-40e9-bef7-edbaf3096efe\") " pod="kube-system/coredns-66bc5c9577-l2rmq"
	Oct 18 09:32:09 no-preload-886951 kubelet[2003]: I1018 09:32:09.114351    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/16c0c41e-8ef0-457f-af9d-4f27c563f4a8-tmp\") pod \"storage-provisioner\" (UID: \"16c0c41e-8ef0-457f-af9d-4f27c563f4a8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:32:09 no-preload-886951 kubelet[2003]: W1018 09:32:09.328236    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/crio-fdb987e80af8c9fee476cb154855150c6f6ea16602dc5c16fddaae1b3cf2defe WatchSource:0}: Error finding container fdb987e80af8c9fee476cb154855150c6f6ea16602dc5c16fddaae1b3cf2defe: Status 404 returned error can't find the container with id fdb987e80af8c9fee476cb154855150c6f6ea16602dc5c16fddaae1b3cf2defe
	Oct 18 09:32:09 no-preload-886951 kubelet[2003]: W1018 09:32:09.363066    2003 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/crio-6ba81e76f8dbd813f462e91d62745d1b2d719fb47502e98630aeef4434051078 WatchSource:0}: Error finding container 6ba81e76f8dbd813f462e91d62745d1b2d719fb47502e98630aeef4434051078: Status 404 returned error can't find the container with id 6ba81e76f8dbd813f462e91d62745d1b2d719fb47502e98630aeef4434051078
	Oct 18 09:32:10 no-preload-886951 kubelet[2003]: I1018 09:32:10.191679    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.191659671 podStartE2EDuration="15.191659671s" podCreationTimestamp="2025-10-18 09:31:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:32:10.191507618 +0000 UTC m=+22.479431232" watchObservedRunningTime="2025-10-18 09:32:10.191659671 +0000 UTC m=+22.479583277"
	Oct 18 09:32:10 no-preload-886951 kubelet[2003]: I1018 09:32:10.206449    2003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-l2rmq" podStartSLOduration=18.206433102 podStartE2EDuration="18.206433102s" podCreationTimestamp="2025-10-18 09:31:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:32:10.206064455 +0000 UTC m=+22.493988053" watchObservedRunningTime="2025-10-18 09:32:10.206433102 +0000 UTC m=+22.494356708"
	Oct 18 09:32:12 no-preload-886951 kubelet[2003]: I1018 09:32:12.437918    2003 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdbk\" (UniqueName: \"kubernetes.io/projected/fb073a56-3a60-4f54-b138-9cee2302d24a-kube-api-access-jhdbk\") pod \"busybox\" (UID: \"fb073a56-3a60-4f54-b138-9cee2302d24a\") " pod="default/busybox"
	
	
	==> storage-provisioner [bc6b0dc2b829482f6733c023700b430955c251fed19ed736cbebb5419bf0d6d3] <==
	I1018 09:32:09.405895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:32:09.419127       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:32:09.419188       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:32:09.424508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:09.430917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:32:09.431878       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:32:09.432074       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-886951_cab32c2a-1bd4-4a96-9fd6-d8ec83117660!
	I1018 09:32:09.432993       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9182a57-e5d4-477c-a0cb-d3046b198831", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-886951_cab32c2a-1bd4-4a96-9fd6-d8ec83117660 became leader
	W1018 09:32:09.458646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:09.463766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:32:09.533069       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-886951_cab32c2a-1bd4-4a96-9fd6-d8ec83117660!
	W1018 09:32:11.466626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:11.473236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:13.477202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:13.484938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:15.487974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:15.494454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:17.497281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:17.502136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:19.505772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:19.510198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:21.513307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:21.519604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
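
Side note on the repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner block above: the provisioner's leader election still uses an Endpoints-based lock, so every acquire/renew round-trips through the deprecated API. A minimal sketch of the Lease-based lock that client-go recommends instead (names and timings here are illustrative, not the provisioner's actual code):

// Sketch only: Lease-based leader election; the provisioner above still
// uses the deprecated v1 Endpoints lock, hence the warnings on each renew.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		// Lock name mirrors the lease in the log above.
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start provisioning */ },
			OnStoppedLeading: func() { /* stop work */ },
		},
	})
}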
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-886951 -n no-preload-886951
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-886951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.48s)
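For context on the pod_ready lines at the top of this section: minikube polls each control-plane pod by label until it reports Ready (or is gone, which the sketch below omits). A rough client-go equivalent of that wait loop, under assumed defaults; this is not minikube's actual implementation:

// Sketch: poll kube-system pods matching each control-plane label until Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Same label set the log lines above enumerate.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(6 * time.Minute) // assumed timeout
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
				break
			}
			if time.Now().After(deadline) {
				panic(fmt.Sprintf("timed out waiting for %s", sel))
			}
			time.Sleep(400 * time.Millisecond)
		}
	}
}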

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (371.082111ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:32:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
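
The exit status 11 above is minikube's MK_ADDON_ENABLE_PAUSED path: before enabling an addon it checks whether the cluster is paused by listing runc containers, and on this just-restarted crio node /run/runc does not exist yet, so `sudo runc list -f json` fails and the whole command aborts. A rough sketch of that style of paused check (hypothetical helper, not minikube's actual implementation):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the two fields of `runc list -f json` output
	// that a paused check needs.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	// pausedContainers reports which containers runc considers paused.
	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On a freshly restarted crio node /run/runc may not exist
			// yet, which is the exact failure in the stderr above.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("paused:", ids)
	}
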
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-559379 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-559379 describe deploy/metrics-server -n kube-system: exit status 1 (147.807888ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-559379 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
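
The expected string follows directly from the flags: `--registries=MetricsServer=fake.domain` prefixes the addon image `registry.k8s.io/echoserver:1.4`, so the deployment should reference `fake.domain/registry.k8s.io/echoserver:1.4`, which the test greps for in the (here missing) describe output. A tiny illustration of that composition (behavior inferred from the expected value, not minikube's exact code):

	package main

	import (
		"fmt"
		"strings"
	)

	// composeImage mirrors how a custom addon registry is prepended to
	// the addon image (inferred from the expected string in the test).
	func composeImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return strings.TrimSuffix(registry, "/") + "/" + image
	}

	func main() {
		fmt.Println(composeImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
		// Output: fake.domain/registry.k8s.io/echoserver:1.4
	}
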
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-559379
helpers_test.go:243: (dbg) docker inspect embed-certs-559379:

-- stdout --
	[
	    {
	        "Id": "28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0",
	        "Created": "2025-10-18T09:31:14.969231495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1464174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:31:15.12510846Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/hostname",
	        "HostsPath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/hosts",
	        "LogPath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0-json.log",
	        "Name": "/embed-certs-559379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-559379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-559379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0",
	                "LowerDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-559379",
	                "Source": "/var/lib/docker/volumes/embed-certs-559379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-559379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-559379",
	                "name.minikube.sigs.k8s.io": "embed-certs-559379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05b587720738af5819b1166c23f05bc885de794a7895f1dd91cc05c17c990965",
	            "SandboxKey": "/var/run/docker/netns/05b587720738",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34886"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34887"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34890"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34888"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34889"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-559379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:37:37:80:d7:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6157e554c859f57a7166278cd1d0343828367a13a26ff7877c8ce4c80e272af",
	                    "EndpointID": "71340814ff673e228c07cdc95d0b3e253d70a7198a6bde513acbbe78739ff215",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-559379",
	                        "28d5892e22ac"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
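
The full inspect JSON above is captured for context, but only a couple of fields matter to the post-mortem (container state and the mapped ports). The same Go template syntax the harness itself uses for the 22/tcp lookup later in this log extracts them directly; a minimal sketch, assuming the docker CLI is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		name := "embed-certs-559379"
		// Pull just the container status and the host port mapped to
		// 8443 instead of parsing the whole inspect document.
		tmpl := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. "running 34889"
	}
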
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-559379 -n embed-certs-559379
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-559379 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-559379 logs -n 25: (1.47805186s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │                     │
	│ start   │ -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ delete  │ -p kubernetes-upgrade-757858                                                                                                                                                                                                                  │ kubernetes-upgrade-757858 │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:27 UTC │
	│ start   │ -p cert-options-783705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:27 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ cert-options-783705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ ssh     │ -p cert-options-783705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ delete  │ -p cert-options-783705                                                                                                                                                                                                                        │ cert-options-783705       │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │                     │
	│ stop    │ -p old-k8s-version-136598 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-136598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p cert-expiration-854768                                                                                                                                                                                                                     │ cert-expiration-854768    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ image   │ old-k8s-version-136598 image list --format=json                                                                                                                                                                                               │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-136598 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951         │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598    │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379        │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-886951         │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p no-preload-886951 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-886951         │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-886951         │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951         │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-559379        │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:32:35
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:32:35.345178 1468145 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:32:35.345350 1468145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:32:35.345385 1468145 out.go:374] Setting ErrFile to fd 2...
	I1018 09:32:35.345397 1468145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:32:35.345693 1468145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:32:35.346119 1468145 out.go:368] Setting JSON to false
	I1018 09:32:35.347120 1468145 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40503,"bootTime":1760739453,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:32:35.347185 1468145 start.go:141] virtualization:  
	I1018 09:32:35.350327 1468145 out.go:179] * [no-preload-886951] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:32:35.354176 1468145 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:32:35.354215 1468145 notify.go:220] Checking for updates...
	I1018 09:32:35.360201 1468145 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:32:35.363025 1468145 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:32:35.365864 1468145 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:32:35.368653 1468145 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:32:35.371504 1468145 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:32:35.374803 1468145 config.go:182] Loaded profile config "no-preload-886951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:32:35.375395 1468145 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:32:35.408069 1468145 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:32:35.408193 1468145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:32:35.464690 1468145 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:32:35.454949932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:32:35.464805 1468145 docker.go:318] overlay module found
	I1018 09:32:35.468146 1468145 out.go:179] * Using the docker driver based on existing profile
	I1018 09:32:35.471037 1468145 start.go:305] selected driver: docker
	I1018 09:32:35.471057 1468145 start.go:925] validating driver "docker" against &{Name:no-preload-886951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:32:35.471162 1468145 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:32:35.471944 1468145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:32:35.528200 1468145 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:32:35.518797335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:32:35.528528 1468145 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:32:35.528564 1468145 cni.go:84] Creating CNI manager for ""
	I1018 09:32:35.528625 1468145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:32:35.528670 1468145 start.go:349] cluster config:
	{Name:no-preload-886951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:32:35.531767 1468145 out.go:179] * Starting "no-preload-886951" primary control-plane node in "no-preload-886951" cluster
	I1018 09:32:35.534619 1468145 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:32:35.537471 1468145 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:32:35.540337 1468145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:32:35.540400 1468145 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:32:35.540487 1468145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/config.json ...
	I1018 09:32:35.540776 1468145 cache.go:107] acquiring lock: {Name:mkbebba4bc705d659ee66bc0af56d117598bf518 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:32:35.540865 1468145 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:32:35.540879 1468145 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 117.962µs
	I1018 09:32:35.540893 1468145 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:32:35.540907 1468145 cache.go:107] acquiring lock: {Name:mk55ca2130ad8720b5d4e30a3e3aca89f3adaf85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:32:35.540950 1468145 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:32:35.540959 1468145 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 53.619µs
	I1018 09:32:35.540978 1468145 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:32:35.541030 1468145 cache.go:107] acquiring lock: {Name:mk181c56341c6ab3c8b820245c38e1f457dfcfbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:32:35.541072 1468145 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:32:35.541083 1468145 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 55.424µs
	I1018 09:32:35.541089 1468145 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:32:35.541100 1468145 cache.go:107] acquiring lock: {Name:mk23edb8e930744ec07884b432879c4ea00b2405 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:32:35.541129 1468145 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:32:35.541134 1468145 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 36.273µs
	I1018 09:32:35.541143 1468145 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:32:35.541152 1468145 cache.go:107] acquiring lock: {Name:mk8d3760b83fd8a7218910885f73a4559e163755 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:32:35.541182 1468145 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:32:35.541192 1468145 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 41.099µs
	I1018 09:32:35.541199 1468145 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:32:35.541208 1468145 cache.go:107] acquiring lock: {Name:mkccda2c66e79badbf58f1b3c791a60ea2d0dd4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:32:35.541236 1468145 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:32:35.541248 1468145 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 40.679µs
	I1018 09:32:35.541254 1468145 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:32:35.541262 1468145 cache.go:107] acquiring lock: {Name:mk3f05ac3a6df0aaf5c01de1c3278a44e71a1ede Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:32:35.541292 1468145 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:32:35.541300 1468145 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 38.801µs
	I1018 09:32:35.541306 1468145 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:32:35.541466 1468145 cache.go:107] acquiring lock: {Name:mkaa43f9374ace13fbeea7697fbebfe03a59b228 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:32:35.541514 1468145 cache.go:115] /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:32:35.541525 1468145 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 63.933µs
	I1018 09:32:35.541532 1468145 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:32:35.541539 1468145 cache.go:87] Successfully saved all images to host disk.
	I1018 09:32:35.561131 1468145 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:32:35.561157 1468145 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:32:35.561175 1468145 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:32:35.561203 1468145 start.go:360] acquireMachinesLock for no-preload-886951: {Name:mk1b35ce5d45058835b57539f98f93aa21da27b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:32:35.561261 1468145 start.go:364] duration metric: took 38.621µs to acquireMachinesLock for "no-preload-886951"
	I1018 09:32:35.561285 1468145 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:32:35.561299 1468145 fix.go:54] fixHost starting: 
	I1018 09:32:35.561553 1468145 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:32:35.582650 1468145 fix.go:112] recreateIfNeeded on no-preload-886951: state=Stopped err=<nil>
	W1018 09:32:35.582691 1468145 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:32:33.660275 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	W1018 09:32:36.156881 1463591 node_ready.go:57] node "embed-certs-559379" has "Ready":"False" status (will retry)
	I1018 09:32:36.657671 1463591 node_ready.go:49] node "embed-certs-559379" is "Ready"
	I1018 09:32:36.657698 1463591 node_ready.go:38] duration metric: took 40.004921077s for node "embed-certs-559379" to be "Ready" ...
	I1018 09:32:36.657711 1463591 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:32:36.657768 1463591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:32:36.678084 1463591 api_server.go:72] duration metric: took 42.00007304s to wait for apiserver process to appear ...
	I1018 09:32:36.678105 1463591 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:32:36.678123 1463591 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:32:36.686707 1463591 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:32:36.687727 1463591 api_server.go:141] control plane version: v1.34.1
	I1018 09:32:36.687748 1463591 api_server.go:131] duration metric: took 9.635842ms to wait for apiserver health ...
	I1018 09:32:36.687757 1463591 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:32:36.691172 1463591 system_pods.go:59] 8 kube-system pods found
	I1018 09:32:36.691205 1463591 system_pods.go:61] "coredns-66bc5c9577-t9blq" [07dead7a-c196-4355-8e63-d7dbe47b07cc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:36.691212 1463591 system_pods.go:61] "etcd-embed-certs-559379" [473c810d-3278-481b-ad96-7f200a82f830] Running
	I1018 09:32:36.691218 1463591 system_pods.go:61] "kindnet-6ltrq" [ca80e038-38ba-42a6-8275-fcc38916c7ca] Running
	I1018 09:32:36.691223 1463591 system_pods.go:61] "kube-apiserver-embed-certs-559379" [ed153ff3-f3bf-44ba-ad22-b935d59b6c38] Running
	I1018 09:32:36.691227 1463591 system_pods.go:61] "kube-controller-manager-embed-certs-559379" [dadcca5c-657c-42e4-865c-cc21d7af7fbc] Running
	I1018 09:32:36.691231 1463591 system_pods.go:61] "kube-proxy-82pzn" [4d204191-f23a-4031-a37d-a4c1ec529e4c] Running
	I1018 09:32:36.691235 1463591 system_pods.go:61] "kube-scheduler-embed-certs-559379" [0bc4c8ce-35cf-41a6-a6fa-a1834adb12a4] Running
	I1018 09:32:36.691241 1463591 system_pods.go:61] "storage-provisioner" [0e85b72f-adef-4429-bf1f-1f003538e5bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:36.691252 1463591 system_pods.go:74] duration metric: took 3.489183ms to wait for pod list to return data ...
	I1018 09:32:36.691260 1463591 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:32:36.694089 1463591 default_sa.go:45] found service account: "default"
	I1018 09:32:36.694112 1463591 default_sa.go:55] duration metric: took 2.846083ms for default service account to be created ...
	I1018 09:32:36.694145 1463591 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:32:36.791611 1463591 system_pods.go:86] 8 kube-system pods found
	I1018 09:32:36.791651 1463591 system_pods.go:89] "coredns-66bc5c9577-t9blq" [07dead7a-c196-4355-8e63-d7dbe47b07cc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:36.791658 1463591 system_pods.go:89] "etcd-embed-certs-559379" [473c810d-3278-481b-ad96-7f200a82f830] Running
	I1018 09:32:36.791665 1463591 system_pods.go:89] "kindnet-6ltrq" [ca80e038-38ba-42a6-8275-fcc38916c7ca] Running
	I1018 09:32:36.791669 1463591 system_pods.go:89] "kube-apiserver-embed-certs-559379" [ed153ff3-f3bf-44ba-ad22-b935d59b6c38] Running
	I1018 09:32:36.791675 1463591 system_pods.go:89] "kube-controller-manager-embed-certs-559379" [dadcca5c-657c-42e4-865c-cc21d7af7fbc] Running
	I1018 09:32:36.791679 1463591 system_pods.go:89] "kube-proxy-82pzn" [4d204191-f23a-4031-a37d-a4c1ec529e4c] Running
	I1018 09:32:36.791684 1463591 system_pods.go:89] "kube-scheduler-embed-certs-559379" [0bc4c8ce-35cf-41a6-a6fa-a1834adb12a4] Running
	I1018 09:32:36.791691 1463591 system_pods.go:89] "storage-provisioner" [0e85b72f-adef-4429-bf1f-1f003538e5bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:36.791722 1463591 retry.go:31] will retry after 266.839469ms: missing components: kube-dns
	I1018 09:32:37.063845 1463591 system_pods.go:86] 8 kube-system pods found
	I1018 09:32:37.063878 1463591 system_pods.go:89] "coredns-66bc5c9577-t9blq" [07dead7a-c196-4355-8e63-d7dbe47b07cc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:32:37.063885 1463591 system_pods.go:89] "etcd-embed-certs-559379" [473c810d-3278-481b-ad96-7f200a82f830] Running
	I1018 09:32:37.063892 1463591 system_pods.go:89] "kindnet-6ltrq" [ca80e038-38ba-42a6-8275-fcc38916c7ca] Running
	I1018 09:32:37.063897 1463591 system_pods.go:89] "kube-apiserver-embed-certs-559379" [ed153ff3-f3bf-44ba-ad22-b935d59b6c38] Running
	I1018 09:32:37.063902 1463591 system_pods.go:89] "kube-controller-manager-embed-certs-559379" [dadcca5c-657c-42e4-865c-cc21d7af7fbc] Running
	I1018 09:32:37.063914 1463591 system_pods.go:89] "kube-proxy-82pzn" [4d204191-f23a-4031-a37d-a4c1ec529e4c] Running
	I1018 09:32:37.063923 1463591 system_pods.go:89] "kube-scheduler-embed-certs-559379" [0bc4c8ce-35cf-41a6-a6fa-a1834adb12a4] Running
	I1018 09:32:37.063930 1463591 system_pods.go:89] "storage-provisioner" [0e85b72f-adef-4429-bf1f-1f003538e5bb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:32:37.063947 1463591 retry.go:31] will retry after 383.317774ms: missing components: kube-dns
	I1018 09:32:37.451283 1463591 system_pods.go:86] 8 kube-system pods found
	I1018 09:32:37.451314 1463591 system_pods.go:89] "coredns-66bc5c9577-t9blq" [07dead7a-c196-4355-8e63-d7dbe47b07cc] Running
	I1018 09:32:37.451322 1463591 system_pods.go:89] "etcd-embed-certs-559379" [473c810d-3278-481b-ad96-7f200a82f830] Running
	I1018 09:32:37.451326 1463591 system_pods.go:89] "kindnet-6ltrq" [ca80e038-38ba-42a6-8275-fcc38916c7ca] Running
	I1018 09:32:37.451331 1463591 system_pods.go:89] "kube-apiserver-embed-certs-559379" [ed153ff3-f3bf-44ba-ad22-b935d59b6c38] Running
	I1018 09:32:37.451336 1463591 system_pods.go:89] "kube-controller-manager-embed-certs-559379" [dadcca5c-657c-42e4-865c-cc21d7af7fbc] Running
	I1018 09:32:37.451340 1463591 system_pods.go:89] "kube-proxy-82pzn" [4d204191-f23a-4031-a37d-a4c1ec529e4c] Running
	I1018 09:32:37.451344 1463591 system_pods.go:89] "kube-scheduler-embed-certs-559379" [0bc4c8ce-35cf-41a6-a6fa-a1834adb12a4] Running
	I1018 09:32:37.451348 1463591 system_pods.go:89] "storage-provisioner" [0e85b72f-adef-4429-bf1f-1f003538e5bb] Running
	I1018 09:32:37.451355 1463591 system_pods.go:126] duration metric: took 757.198833ms to wait for k8s-apps to be running ...
	I1018 09:32:37.451368 1463591 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:32:37.451427 1463591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:32:37.464510 1463591 system_svc.go:56] duration metric: took 13.131671ms WaitForService to wait for kubelet
	I1018 09:32:37.464536 1463591 kubeadm.go:586] duration metric: took 42.786530365s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:32:37.464555 1463591 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:32:37.467384 1463591 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:32:37.467462 1463591 node_conditions.go:123] node cpu capacity is 2
	I1018 09:32:37.467491 1463591 node_conditions.go:105] duration metric: took 2.930126ms to run NodePressure ...
	I1018 09:32:37.467511 1463591 start.go:241] waiting for startup goroutines ...
	I1018 09:32:37.467520 1463591 start.go:246] waiting for cluster config update ...
	I1018 09:32:37.467533 1463591 start.go:255] writing updated cluster config ...
	I1018 09:32:37.467900 1463591 ssh_runner.go:195] Run: rm -f paused
	I1018 09:32:37.471696 1463591 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:32:37.475518 1463591 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t9blq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:37.480824 1463591 pod_ready.go:94] pod "coredns-66bc5c9577-t9blq" is "Ready"
	I1018 09:32:37.480851 1463591 pod_ready.go:86] duration metric: took 5.303893ms for pod "coredns-66bc5c9577-t9blq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:37.483077 1463591 pod_ready.go:83] waiting for pod "etcd-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:37.487140 1463591 pod_ready.go:94] pod "etcd-embed-certs-559379" is "Ready"
	I1018 09:32:37.487167 1463591 pod_ready.go:86] duration metric: took 4.060408ms for pod "etcd-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:37.489371 1463591 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:37.493682 1463591 pod_ready.go:94] pod "kube-apiserver-embed-certs-559379" is "Ready"
	I1018 09:32:37.493709 1463591 pod_ready.go:86] duration metric: took 4.3132ms for pod "kube-apiserver-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:37.495651 1463591 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:37.875807 1463591 pod_ready.go:94] pod "kube-controller-manager-embed-certs-559379" is "Ready"
	I1018 09:32:37.875834 1463591 pod_ready.go:86] duration metric: took 380.162885ms for pod "kube-controller-manager-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:38.081119 1463591 pod_ready.go:83] waiting for pod "kube-proxy-82pzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:38.476338 1463591 pod_ready.go:94] pod "kube-proxy-82pzn" is "Ready"
	I1018 09:32:38.476415 1463591 pod_ready.go:86] duration metric: took 395.266111ms for pod "kube-proxy-82pzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:38.676452 1463591 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:39.078244 1463591 pod_ready.go:94] pod "kube-scheduler-embed-certs-559379" is "Ready"
	I1018 09:32:39.078277 1463591 pod_ready.go:86] duration metric: took 401.797614ms for pod "kube-scheduler-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:32:39.078291 1463591 pod_ready.go:40] duration metric: took 1.606561532s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:32:39.171620 1463591 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:32:39.174598 1463591 out.go:179] * Done! kubectl is now configured to use "embed-certs-559379" cluster and "default" namespace by default
	I1018 09:32:35.587678 1468145 out.go:252] * Restarting existing docker container for "no-preload-886951" ...
	I1018 09:32:35.587764 1468145 cli_runner.go:164] Run: docker start no-preload-886951
	I1018 09:32:35.847954 1468145 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:32:35.871135 1468145 kic.go:430] container "no-preload-886951" state is running.
	I1018 09:32:35.871517 1468145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-886951
	I1018 09:32:35.893865 1468145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/config.json ...
	I1018 09:32:35.894151 1468145 machine.go:93] provisionDockerMachine start ...
	I1018 09:32:35.894225 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:35.915194 1468145 main.go:141] libmachine: Using SSH client type: native
	I1018 09:32:35.915510 1468145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34891 <nil> <nil>}
	I1018 09:32:35.915525 1468145 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:32:35.916281 1468145 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44250->127.0.0.1:34891: read: connection reset by peer
	I1018 09:32:39.071641 1468145 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-886951
	
	I1018 09:32:39.071715 1468145 ubuntu.go:182] provisioning hostname "no-preload-886951"
	I1018 09:32:39.071796 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:39.092751 1468145 main.go:141] libmachine: Using SSH client type: native
	I1018 09:32:39.093079 1468145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34891 <nil> <nil>}
	I1018 09:32:39.093097 1468145 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-886951 && echo "no-preload-886951" | sudo tee /etc/hostname
	I1018 09:32:39.288513 1468145 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-886951
	
	I1018 09:32:39.288594 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:39.311381 1468145 main.go:141] libmachine: Using SSH client type: native
	I1018 09:32:39.311747 1468145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34891 <nil> <nil>}
	I1018 09:32:39.311767 1468145 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-886951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-886951/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-886951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:32:39.471995 1468145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:32:39.472025 1468145 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:32:39.472047 1468145 ubuntu.go:190] setting up certificates
	I1018 09:32:39.472057 1468145 provision.go:84] configureAuth start
	I1018 09:32:39.472118 1468145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-886951
	I1018 09:32:39.490756 1468145 provision.go:143] copyHostCerts
	I1018 09:32:39.490825 1468145 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:32:39.490848 1468145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:32:39.490932 1468145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:32:39.491045 1468145 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:32:39.491057 1468145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:32:39.491086 1468145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:32:39.491165 1468145 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:32:39.491175 1468145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:32:39.491201 1468145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:32:39.491254 1468145 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.no-preload-886951 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-886951]
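
This provisioning step corresponds roughly to the following openssl invocations (a sketch only; minikube generates the certificate in Go, and the file names here simply mirror the paths in the log line above):

    # Create a CSR for the machine key, then sign it with the minikube CA,
    # embedding the same SANs listed in the log line above.
    openssl req -new -key server-key.pem \
      -subj "/O=jenkins.no-preload-886951" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-886951')
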
	I1018 09:32:39.610546 1468145 provision.go:177] copyRemoteCerts
	I1018 09:32:39.610647 1468145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:32:39.610711 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:39.628170 1468145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34891 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:32:39.736014 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:32:39.761982 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:32:39.781183 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:32:39.802127 1468145 provision.go:87] duration metric: took 330.046127ms to configureAuth
	I1018 09:32:39.802156 1468145 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:32:39.802387 1468145 config.go:182] Loaded profile config "no-preload-886951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:32:39.802537 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:39.821825 1468145 main.go:141] libmachine: Using SSH client type: native
	I1018 09:32:39.822127 1468145 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34891 <nil> <nil>}
	I1018 09:32:39.822153 1468145 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:32:40.209456 1468145 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:32:40.209477 1468145 machine.go:96] duration metric: took 4.315307249s to provisionDockerMachine
	I1018 09:32:40.209492 1468145 start.go:293] postStartSetup for "no-preload-886951" (driver="docker")
	I1018 09:32:40.209506 1468145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:32:40.209592 1468145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:32:40.209644 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:40.230680 1468145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34891 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:32:40.335573 1468145 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:32:40.338783 1468145 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:32:40.338811 1468145 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:32:40.338822 1468145 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:32:40.338876 1468145 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:32:40.338958 1468145 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:32:40.339059 1468145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:32:40.347113 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:32:40.364129 1468145 start.go:296] duration metric: took 154.619382ms for postStartSetup
	I1018 09:32:40.364209 1468145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:32:40.364248 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:40.381113 1468145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34891 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:32:40.481222 1468145 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:32:40.485760 1468145 fix.go:56] duration metric: took 4.924462516s for fixHost
	I1018 09:32:40.485785 1468145 start.go:83] releasing machines lock for "no-preload-886951", held for 4.924512361s
	I1018 09:32:40.485873 1468145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-886951
	I1018 09:32:40.502347 1468145 ssh_runner.go:195] Run: cat /version.json
	I1018 09:32:40.502415 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:40.502679 1468145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:32:40.502737 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:40.519755 1468145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34891 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:32:40.533520 1468145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34891 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:32:40.627414 1468145 ssh_runner.go:195] Run: systemctl --version
	I1018 09:32:40.728737 1468145 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:32:40.765325 1468145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:32:40.769722 1468145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:32:40.769789 1468145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:32:40.777870 1468145 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:32:40.777892 1468145 start.go:495] detecting cgroup driver to use...
	I1018 09:32:40.777923 1468145 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:32:40.777970 1468145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:32:40.792744 1468145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:32:40.805865 1468145 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:32:40.805973 1468145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:32:40.821945 1468145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:32:40.835243 1468145 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:32:40.954519 1468145 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:32:41.082094 1468145 docker.go:234] disabling docker service ...
	I1018 09:32:41.082159 1468145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:32:41.096857 1468145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:32:41.111393 1468145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:32:41.236472 1468145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:32:41.359807 1468145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:32:41.373072 1468145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:32:41.387498 1468145 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:32:41.387585 1468145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:32:41.396709 1468145 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:32:41.396830 1468145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:32:41.406441 1468145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:32:41.419241 1468145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:32:41.428992 1468145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:32:41.436879 1468145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:32:41.446395 1468145 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:32:41.457292 1468145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:32:41.470524 1468145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:32:41.482638 1468145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:32:41.491377 1468145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:32:41.640494 1468145 ssh_runner.go:195] Run: sudo systemctl restart crio
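
Taken together, the sed edits above leave the CRI-O drop-in looking roughly like this (a reconstruction from the commands, not a capture of the actual file on the node):

    $ sudo cat /etc/crio/crio.conf.d/02-crio.conf   # hypothetical resulting content
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
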
	I1018 09:32:41.796088 1468145 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:32:41.796216 1468145 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:32:41.800859 1468145 start.go:563] Will wait 60s for crictl version
	I1018 09:32:41.800975 1468145 ssh_runner.go:195] Run: which crictl
	I1018 09:32:41.804879 1468145 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:32:41.861637 1468145 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:32:41.861803 1468145 ssh_runner.go:195] Run: crio --version
	I1018 09:32:41.894730 1468145 ssh_runner.go:195] Run: crio --version
	I1018 09:32:41.933182 1468145 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:32:41.936191 1468145 cli_runner.go:164] Run: docker network inspect no-preload-886951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:32:41.952726 1468145 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:32:41.956774 1468145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:32:41.967340 1468145 kubeadm.go:883] updating cluster {Name:no-preload-886951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:32:41.967469 1468145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:32:41.967512 1468145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:32:42.002441 1468145 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:32:42.002467 1468145 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:32:42.002475 1468145 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 09:32:42.002596 1468145 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-886951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
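
The empty ExecStart= line in the kubelet drop-in above is a systemd convention, not an error: in an override, an empty assignment first clears the ExecStart inherited from the base kubelet.service, so the command on the next line becomes the only one. Without it, systemd would reject the unit for having multiple ExecStart entries (allowed only for Type=oneshot).
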
	I1018 09:32:42.002684 1468145 ssh_runner.go:195] Run: crio config
	I1018 09:32:42.098009 1468145 cni.go:84] Creating CNI manager for ""
	I1018 09:32:42.098052 1468145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:32:42.098082 1468145 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:32:42.098121 1468145 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-886951 NodeName:no-preload-886951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:32:42.098360 1468145 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-886951"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:32:42.098451 1468145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:32:42.115040 1468145 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:32:42.115210 1468145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:32:42.125954 1468145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:32:42.154262 1468145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:32:42.184484 1468145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:32:42.205224 1468145 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:32:42.211018 1468145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
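
The temp-file-then-`sudo cp` pattern in this command is what makes the rewrite work under sudo: with a plain redirection such as `sudo cmd > /etc/hosts`, the shell would open the target file as the unprivileged user. A simpler append-only alternative (illustrative) is sudo tee:

    echo "192.168.85.2	control-plane.minikube.internal" | sudo tee -a /etc/hosts
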
	I1018 09:32:42.224847 1468145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:32:42.373497 1468145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:32:42.396327 1468145 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951 for IP: 192.168.85.2
	I1018 09:32:42.396390 1468145 certs.go:195] generating shared ca certs ...
	I1018 09:32:42.396421 1468145 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:32:42.396597 1468145 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:32:42.396664 1468145 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:32:42.396686 1468145 certs.go:257] generating profile certs ...
	I1018 09:32:42.396798 1468145 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.key
	I1018 09:32:42.396901 1468145 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.key.8ee16fb5
	I1018 09:32:42.396977 1468145 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.key
	I1018 09:32:42.397117 1468145 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:32:42.397182 1468145 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:32:42.397207 1468145 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:32:42.397267 1468145 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:32:42.397319 1468145 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:32:42.397365 1468145 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:32:42.397445 1468145 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:32:42.398066 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:32:42.424633 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:32:42.445607 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:32:42.467698 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:32:42.489233 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:32:42.511526 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:32:42.533074 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:32:42.556620 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:32:42.577521 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:32:42.600554 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:32:42.623128 1468145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:32:42.642726 1468145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:32:42.655620 1468145 ssh_runner.go:195] Run: openssl version
	I1018 09:32:42.663505 1468145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:32:42.672251 1468145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:32:42.675788 1468145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:32:42.675895 1468145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:32:42.719759 1468145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:32:42.730404 1468145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:32:42.739631 1468145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:32:42.743650 1468145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:32:42.743768 1468145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:32:42.784654 1468145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:32:42.792500 1468145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:32:42.800980 1468145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:32:42.805027 1468145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:32:42.805091 1468145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:32:42.845603 1468145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
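
The 8-hex-digit link names here (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup convention: `openssl x509 -hash` prints a hash of the certificate's subject name, and OpenSSL resolves trusted CAs in /etc/ssl/certs as <hash>.0, <hash>.1, and so on. Creating such a link by hand looks like this (illustrative):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem)
    sudo ln -fs /usr/share/ca-certificates/1276097.pem "/etc/ssl/certs/${h}.0"
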
	I1018 09:32:42.854198 1468145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:32:42.857875 1468145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:32:42.898443 1468145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:32:42.938896 1468145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:32:42.979620 1468145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:32:43.024164 1468145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:32:43.070581 1468145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
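
In these probes, `-checkend 86400` makes openssl exit non-zero when the certificate will expire within 86400 seconds (24 hours), letting the restart path flag certs that need regenerating before reuse. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h (or already expired)"
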
	I1018 09:32:43.143916 1468145 kubeadm.go:400] StartCluster: {Name:no-preload-886951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-886951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:32:43.144052 1468145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:32:43.144160 1468145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:32:43.237296 1468145 cri.go:89] found id: "52a4f82d25803437e2b4f9a5a0979d2eddfe52226bc7144054185dd64cbed59e"
	I1018 09:32:43.237359 1468145 cri.go:89] found id: "0b366ec41824a247d56af9aadad985448d0c26d9381e2243c07a327589c034da"
	I1018 09:32:43.237378 1468145 cri.go:89] found id: "0cc3656fad24ee9a111ade774682a71330029b5e0750b4e80a331f7222647630"
	I1018 09:32:43.237397 1468145 cri.go:89] found id: "8333b66cfc8ee31208ccfc044b7c62e87b44c35fc9b5f0567f504bfb9f50c42b"
	I1018 09:32:43.237416 1468145 cri.go:89] found id: ""
	I1018 09:32:43.237490 1468145 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:32:43.254742 1468145 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:32:43Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:32:43.254901 1468145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:32:43.264006 1468145 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:32:43.264076 1468145 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:32:43.264156 1468145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:32:43.272747 1468145 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:32:43.273631 1468145 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-886951" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:32:43.274182 1468145 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-1274243/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-886951" cluster setting kubeconfig missing "no-preload-886951" context setting]
	I1018 09:32:43.275025 1468145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:32:43.276840 1468145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:32:43.286708 1468145 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 09:32:43.286781 1468145 kubeadm.go:601] duration metric: took 22.685826ms to restartPrimaryControlPlane
	I1018 09:32:43.286806 1468145 kubeadm.go:402] duration metric: took 142.966922ms to StartCluster
	I1018 09:32:43.286851 1468145 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:32:43.286927 1468145 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:32:43.288505 1468145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:32:43.288818 1468145 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:32:43.289320 1468145 config.go:182] Loaded profile config "no-preload-886951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:32:43.289352 1468145 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:32:43.289497 1468145 addons.go:69] Setting storage-provisioner=true in profile "no-preload-886951"
	I1018 09:32:43.289518 1468145 addons.go:238] Setting addon storage-provisioner=true in "no-preload-886951"
	W1018 09:32:43.289525 1468145 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:32:43.289540 1468145 addons.go:69] Setting dashboard=true in profile "no-preload-886951"
	I1018 09:32:43.289557 1468145 addons.go:69] Setting default-storageclass=true in profile "no-preload-886951"
	I1018 09:32:43.289572 1468145 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-886951"
	I1018 09:32:43.289595 1468145 addons.go:238] Setting addon dashboard=true in "no-preload-886951"
	W1018 09:32:43.289629 1468145 addons.go:247] addon dashboard should already be in state true
	I1018 09:32:43.289665 1468145 host.go:66] Checking if "no-preload-886951" exists ...
	I1018 09:32:43.289875 1468145 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:32:43.290342 1468145 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:32:43.289553 1468145 host.go:66] Checking if "no-preload-886951" exists ...
	I1018 09:32:43.293295 1468145 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:32:43.298455 1468145 out.go:179] * Verifying Kubernetes components...
	I1018 09:32:43.307704 1468145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:32:43.352181 1468145 addons.go:238] Setting addon default-storageclass=true in "no-preload-886951"
	W1018 09:32:43.352211 1468145 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:32:43.352237 1468145 host.go:66] Checking if "no-preload-886951" exists ...
	I1018 09:32:43.352652 1468145 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:32:43.363215 1468145 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:32:43.366704 1468145 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:32:43.369674 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:32:43.369698 1468145 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:32:43.369768 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:43.375069 1468145 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:32:43.378029 1468145 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:32:43.378050 1468145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:32:43.378120 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:43.436883 1468145 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:32:43.436904 1468145 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:32:43.436978 1468145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:32:43.445398 1468145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34891 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:32:43.458142 1468145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34891 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:32:43.473574 1468145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34891 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:32:43.658537 1468145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:32:43.713026 1468145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:32:43.757224 1468145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:32:43.758763 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:32:43.758834 1468145 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:32:43.812011 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:32:43.812084 1468145 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:32:43.867571 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:32:43.867647 1468145 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:32:43.945878 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:32:43.945947 1468145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:32:44.014012 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:32:44.014091 1468145 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:32:44.030088 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:32:44.030166 1468145 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:32:44.046693 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:32:44.046770 1468145 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:32:44.061550 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:32:44.061625 1468145 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:32:44.076836 1468145 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:32:44.076912 1468145 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:32:44.091089 1468145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Oct 18 09:32:36 embed-certs-559379 crio[835]: time="2025-10-18T09:32:36.975814322Z" level=info msg="Created container 08a71886bf39e99517a89f939aededa3b5d6155457173de12cc5b4171ee162d0: kube-system/coredns-66bc5c9577-t9blq/coredns" id=beefcc71-2209-427d-8f7e-f9981d7dce7c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:32:36 embed-certs-559379 crio[835]: time="2025-10-18T09:32:36.976945712Z" level=info msg="Starting container: 08a71886bf39e99517a89f939aededa3b5d6155457173de12cc5b4171ee162d0" id=9600770f-8835-4655-a40e-f189c54bb8f3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:32:36 embed-certs-559379 crio[835]: time="2025-10-18T09:32:36.984280589Z" level=info msg="Started container" PID=1747 containerID=08a71886bf39e99517a89f939aededa3b5d6155457173de12cc5b4171ee162d0 description=kube-system/coredns-66bc5c9577-t9blq/coredns id=9600770f-8835-4655-a40e-f189c54bb8f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=28c9161962b26ee2724665572d93f18640038d25cc28b0808e2cf898cd30dbfd
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.721276982Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5635f94e-af5d-4b2c-bd77-065b43d24ce5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.721354518Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.730583233Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:650206ac649f109487badd341ebb466f38ad3cfd31e27f31d282d7e5241c0bfe UID:a8bb1150-d2bb-4277-91a3-9eb18dfdfc48 NetNS:/var/run/netns/3c99f106-27a8-4690-ae9f-53498804c77f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000789d0}] Aliases:map[]}"
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.73202714Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.747609262Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:650206ac649f109487badd341ebb466f38ad3cfd31e27f31d282d7e5241c0bfe UID:a8bb1150-d2bb-4277-91a3-9eb18dfdfc48 NetNS:/var/run/netns/3c99f106-27a8-4690-ae9f-53498804c77f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000789d0}] Aliases:map[]}"
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.748077737Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.751448753Z" level=info msg="Ran pod sandbox 650206ac649f109487badd341ebb466f38ad3cfd31e27f31d282d7e5241c0bfe with infra container: default/busybox/POD" id=5635f94e-af5d-4b2c-bd77-065b43d24ce5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.753785057Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2076b3cd-c5de-4c59-bfc9-21a831c28696 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.75391476Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2076b3cd-c5de-4c59-bfc9-21a831c28696 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.753949959Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2076b3cd-c5de-4c59-bfc9-21a831c28696 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.759364011Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6a30dc2f-bb64-43ef-b344-aeb900ccbffd name=/runtime.v1.ImageService/PullImage
	Oct 18 09:32:39 embed-certs-559379 crio[835]: time="2025-10-18T09:32:39.765256425Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.820287836Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6a30dc2f-bb64-43ef-b344-aeb900ccbffd name=/runtime.v1.ImageService/PullImage
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.821024037Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e94530ae-4eaa-46af-8fc8-0c2fa3283037 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.82289642Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b693e52f-8367-46fd-86f6-0716b63c2806 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.828359201Z" level=info msg="Creating container: default/busybox/busybox" id=360bc5bc-15c7-418c-9172-d2489d464504 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.829447655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.834313587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.834746757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.854208516Z" level=info msg="Created container 83700ad49bae936bc5d60124149edb3b4ab8e26e1c661345ab4a20346774050e: default/busybox/busybox" id=360bc5bc-15c7-418c-9172-d2489d464504 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.855039779Z" level=info msg="Starting container: 83700ad49bae936bc5d60124149edb3b4ab8e26e1c661345ab4a20346774050e" id=041a5371-9ac1-4a39-94ae-2c25899c32c7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:32:41 embed-certs-559379 crio[835]: time="2025-10-18T09:32:41.859800147Z" level=info msg="Started container" PID=1802 containerID=83700ad49bae936bc5d60124149edb3b4ab8e26e1c661345ab4a20346774050e description=default/busybox/busybox id=041a5371-9ac1-4a39-94ae-2c25899c32c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=650206ac649f109487badd341ebb466f38ad3cfd31e27f31d282d7e5241c0bfe
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	83700ad49bae9       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   650206ac649f1       busybox                                      default
	08a71886bf39e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   28c9161962b26       coredns-66bc5c9577-t9blq                     kube-system
	1eb7e3b4bbfdc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   2e364c3270604       storage-provisioner                          kube-system
	28cb1476c8c02       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   006f02624eccf       kindnet-6ltrq                                kube-system
	f87b1ed56129a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   2d6cb051ef05c       kube-proxy-82pzn                             kube-system
	d069db0b86b51       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   63fd652205457       kube-apiserver-embed-certs-559379            kube-system
	c3a4dc8e1074e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   43e26ede11fe7       kube-controller-manager-embed-certs-559379   kube-system
	dcbae4c67ce70       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   efb15a8520635       kube-scheduler-embed-certs-559379            kube-system
	a318ac66248e7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   9ed5ce8c8d601       etcd-embed-certs-559379                      kube-system
	
	
	==> coredns [08a71886bf39e99517a89f939aededa3b5d6155457173de12cc5b4171ee162d0] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43378 - 18575 "HINFO IN 8899394842932696354.8394550703974045074. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023186038s
	
	
	==> describe nodes <==
	Name:               embed-certs-559379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-559379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=embed-certs-559379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_31_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:31:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-559379
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:32:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:32:50 +0000   Sat, 18 Oct 2025 09:31:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:32:50 +0000   Sat, 18 Oct 2025 09:31:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:32:50 +0000   Sat, 18 Oct 2025 09:31:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:32:50 +0000   Sat, 18 Oct 2025 09:32:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-559379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                963b98db-af62-4b5f-9ed9-d04f81062030
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-t9blq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-559379                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-6ltrq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-559379             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-embed-certs-559379    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-82pzn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-559379             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node embed-certs-559379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x8 over 73s)  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientPID
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-559379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-559379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-559379 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-559379 event: Registered Node embed-certs-559379 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-559379 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct18 09:10] overlayfs: idmapped layers are currently not supported
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a318ac66248e76911fab7a4392c66e3abdf3a819b5b85b6f258d93b4de975c2e] <==
	{"level":"warn","ts":"2025-10-18T09:31:42.298774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.342672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.416210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.463153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.488761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.568027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.631929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.689038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.749874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.820486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.879411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:42.944068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.032638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.066773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.153340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.228009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.257182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.322536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.376041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.446300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.490009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.568286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.620089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.687315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:31:43.964870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55774","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:32:50 up 11:15,  0 user,  load average: 3.81, 3.42, 2.71
	Linux embed-certs-559379 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [28cb1476c8c028438fbc8798fc6cf2aa7e5d29a96beaf9b8e99be848ae993ea2] <==
	I1018 09:31:55.526601       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:31:55.533419       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:31:55.533552       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:31:55.533565       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:31:55.533582       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:31:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:31:55.730552       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:31:55.730570       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:31:55.730578       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:31:55.730861       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:32:25.731092       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 09:32:25.731254       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:32:25.731313       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:32:25.731375       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1018 09:32:27.331049       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:32:27.331079       1 metrics.go:72] Registering metrics
	I1018 09:32:27.331137       1 controller.go:711] "Syncing nftables rules"
	I1018 09:32:35.736646       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:32:35.736697       1 main.go:301] handling current node
	I1018 09:32:45.731984       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:32:45.732034       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d069db0b86b51ac965528cb49aa3a8aaf5680ccc20d1de288c2804d15cf85d4c] <==
	I1018 09:31:45.841371       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1018 09:31:45.842398       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:31:45.868682       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:31:45.868836       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:31:45.914048       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:31:45.917659       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:31:46.048616       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:31:46.251518       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:31:46.266463       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:31:46.266536       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:31:47.398262       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:31:47.463649       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:31:47.549763       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:31:47.558232       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 09:31:47.559565       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:31:47.564729       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:31:48.490017       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:31:48.615362       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:31:48.729063       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:31:48.754552       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:31:54.191129       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:31:54.355538       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:31:54.657966       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:31:54.675235       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 09:32:48.612053       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:41288: use of closed network connection
	
	
	==> kube-controller-manager [c3a4dc8e1074e682a31bf0a34a4c405f1be17cb248e392b4f2cd051f47e02007] <==
	I1018 09:31:53.613820       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:31:53.623329       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:31:53.629036       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:31:53.629984       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:31:53.630679       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:31:53.630769       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:31:53.630831       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-559379"
	I1018 09:31:53.630880       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:31:53.639961       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:31:53.640328       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:31:53.640376       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:31:53.640419       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:31:53.640627       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:31:53.640783       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:31:53.640861       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:31:53.640918       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:31:53.647814       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:31:53.649033       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:31:53.649053       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:31:53.649059       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:31:53.649135       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:31:53.658418       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:31:53.682234       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:31:53.720277       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-559379" podCIDRs=["10.244.0.0/24"]
	I1018 09:32:38.636718       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f87b1ed56129a5bfc30e15001f39a9b93c3c6af3a6cb8a6c0a9eacedb53244c0] <==
	I1018 09:31:55.504029       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:31:55.654465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:31:55.755988       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:31:55.756029       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:31:55.756104       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:31:55.871108       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:31:55.871194       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:31:55.884038       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:31:55.894648       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:31:55.894674       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:31:55.905263       1 config.go:200] "Starting service config controller"
	I1018 09:31:55.905283       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:31:55.905299       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:31:55.905305       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:31:55.905316       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:31:55.905320       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:31:55.906278       1 config.go:309] "Starting node config controller"
	I1018 09:31:55.906288       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:31:55.906295       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:31:56.005548       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:31:56.005590       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:31:56.005672       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dcbae4c67ce70782d8bec1b029005fd053910023b172100a1a128cbaf04382b8] <==
	I1018 09:31:46.389363       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:31:46.399983       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:31:46.400182       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:31:46.400205       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:31:46.400221       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 09:31:46.417025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:31:46.417165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:31:46.417285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:31:46.417324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:31:46.417360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:31:46.420126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:31:46.420879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:31:46.421234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 09:31:46.422743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:31:46.422807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:31:46.422891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:31:46.422953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:31:46.423044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:31:46.423080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:31:46.423155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:31:46.423172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:31:46.423419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:31:46.423465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:31:46.423516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1018 09:31:47.701130       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:31:49 embed-certs-559379 kubelet[1311]: E1018 09:31:49.931654    1311 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-559379\" already exists" pod="kube-system/etcd-embed-certs-559379"
	Oct 18 09:31:53 embed-certs-559379 kubelet[1311]: I1018 09:31:53.712337    1311 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:31:53 embed-certs-559379 kubelet[1311]: I1018 09:31:53.713475    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: I1018 09:31:54.536748    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bj9k\" (UniqueName: \"kubernetes.io/projected/4d204191-f23a-4031-a37d-a4c1ec529e4c-kube-api-access-6bj9k\") pod \"kube-proxy-82pzn\" (UID: \"4d204191-f23a-4031-a37d-a4c1ec529e4c\") " pod="kube-system/kube-proxy-82pzn"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: I1018 09:31:54.536800    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca80e038-38ba-42a6-8275-fcc38916c7ca-lib-modules\") pod \"kindnet-6ltrq\" (UID: \"ca80e038-38ba-42a6-8275-fcc38916c7ca\") " pod="kube-system/kindnet-6ltrq"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: I1018 09:31:54.536823    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d204191-f23a-4031-a37d-a4c1ec529e4c-lib-modules\") pod \"kube-proxy-82pzn\" (UID: \"4d204191-f23a-4031-a37d-a4c1ec529e4c\") " pod="kube-system/kube-proxy-82pzn"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: I1018 09:31:54.536840    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ca80e038-38ba-42a6-8275-fcc38916c7ca-cni-cfg\") pod \"kindnet-6ltrq\" (UID: \"ca80e038-38ba-42a6-8275-fcc38916c7ca\") " pod="kube-system/kindnet-6ltrq"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: I1018 09:31:54.536857    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blkzd\" (UniqueName: \"kubernetes.io/projected/ca80e038-38ba-42a6-8275-fcc38916c7ca-kube-api-access-blkzd\") pod \"kindnet-6ltrq\" (UID: \"ca80e038-38ba-42a6-8275-fcc38916c7ca\") " pod="kube-system/kindnet-6ltrq"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: I1018 09:31:54.536880    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d204191-f23a-4031-a37d-a4c1ec529e4c-xtables-lock\") pod \"kube-proxy-82pzn\" (UID: \"4d204191-f23a-4031-a37d-a4c1ec529e4c\") " pod="kube-system/kube-proxy-82pzn"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: I1018 09:31:54.536919    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca80e038-38ba-42a6-8275-fcc38916c7ca-xtables-lock\") pod \"kindnet-6ltrq\" (UID: \"ca80e038-38ba-42a6-8275-fcc38916c7ca\") " pod="kube-system/kindnet-6ltrq"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: I1018 09:31:54.536938    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d204191-f23a-4031-a37d-a4c1ec529e4c-kube-proxy\") pod \"kube-proxy-82pzn\" (UID: \"4d204191-f23a-4031-a37d-a4c1ec529e4c\") " pod="kube-system/kube-proxy-82pzn"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: I1018 09:31:54.748500    1311 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 09:31:54 embed-certs-559379 kubelet[1311]: W1018 09:31:54.866904    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/crio-2d6cb051ef05cf6059d595cfa0da21d383acfe6b4dcfcf21772e62d13e6403e1 WatchSource:0}: Error finding container 2d6cb051ef05cf6059d595cfa0da21d383acfe6b4dcfcf21772e62d13e6403e1: Status 404 returned error can't find the container with id 2d6cb051ef05cf6059d595cfa0da21d383acfe6b4dcfcf21772e62d13e6403e1
	Oct 18 09:31:55 embed-certs-559379 kubelet[1311]: W1018 09:31:55.143316    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/crio-006f02624eccfaa2c3fa8d579013c984d7450a95311e0edbc0c6565309fd96bf WatchSource:0}: Error finding container 006f02624eccfaa2c3fa8d579013c984d7450a95311e0edbc0c6565309fd96bf: Status 404 returned error can't find the container with id 006f02624eccfaa2c3fa8d579013c984d7450a95311e0edbc0c6565309fd96bf
	Oct 18 09:31:56 embed-certs-559379 kubelet[1311]: I1018 09:31:56.040874    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-82pzn" podStartSLOduration=2.040853962 podStartE2EDuration="2.040853962s" podCreationTimestamp="2025-10-18 09:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:31:56.040365787 +0000 UTC m=+7.609362169" watchObservedRunningTime="2025-10-18 09:31:56.040853962 +0000 UTC m=+7.609850336"
	Oct 18 09:31:56 embed-certs-559379 kubelet[1311]: I1018 09:31:56.117381    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6ltrq" podStartSLOduration=2.117364037 podStartE2EDuration="2.117364037s" podCreationTimestamp="2025-10-18 09:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:31:56.11715619 +0000 UTC m=+7.686152572" watchObservedRunningTime="2025-10-18 09:31:56.117364037 +0000 UTC m=+7.686360411"
	Oct 18 09:32:36 embed-certs-559379 kubelet[1311]: I1018 09:32:36.242864    1311 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:32:36 embed-certs-559379 kubelet[1311]: I1018 09:32:36.410117    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0e85b72f-adef-4429-bf1f-1f003538e5bb-tmp\") pod \"storage-provisioner\" (UID: \"0e85b72f-adef-4429-bf1f-1f003538e5bb\") " pod="kube-system/storage-provisioner"
	Oct 18 09:32:36 embed-certs-559379 kubelet[1311]: I1018 09:32:36.410375    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxvw\" (UniqueName: \"kubernetes.io/projected/0e85b72f-adef-4429-bf1f-1f003538e5bb-kube-api-access-fpxvw\") pod \"storage-provisioner\" (UID: \"0e85b72f-adef-4429-bf1f-1f003538e5bb\") " pod="kube-system/storage-provisioner"
	Oct 18 09:32:36 embed-certs-559379 kubelet[1311]: I1018 09:32:36.512408    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07dead7a-c196-4355-8e63-d7dbe47b07cc-config-volume\") pod \"coredns-66bc5c9577-t9blq\" (UID: \"07dead7a-c196-4355-8e63-d7dbe47b07cc\") " pod="kube-system/coredns-66bc5c9577-t9blq"
	Oct 18 09:32:36 embed-certs-559379 kubelet[1311]: I1018 09:32:36.512494    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr9ls\" (UniqueName: \"kubernetes.io/projected/07dead7a-c196-4355-8e63-d7dbe47b07cc-kube-api-access-xr9ls\") pod \"coredns-66bc5c9577-t9blq\" (UID: \"07dead7a-c196-4355-8e63-d7dbe47b07cc\") " pod="kube-system/coredns-66bc5c9577-t9blq"
	Oct 18 09:32:36 embed-certs-559379 kubelet[1311]: W1018 09:32:36.943360    1311 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/crio-28c9161962b26ee2724665572d93f18640038d25cc28b0808e2cf898cd30dbfd WatchSource:0}: Error finding container 28c9161962b26ee2724665572d93f18640038d25cc28b0808e2cf898cd30dbfd: Status 404 returned error can't find the container with id 28c9161962b26ee2724665572d93f18640038d25cc28b0808e2cf898cd30dbfd
	Oct 18 09:32:37 embed-certs-559379 kubelet[1311]: I1018 09:32:37.155753    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-t9blq" podStartSLOduration=43.155732289 podStartE2EDuration="43.155732289s" podCreationTimestamp="2025-10-18 09:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:32:37.136342594 +0000 UTC m=+48.705338984" watchObservedRunningTime="2025-10-18 09:32:37.155732289 +0000 UTC m=+48.724728663"
	Oct 18 09:32:37 embed-certs-559379 kubelet[1311]: I1018 09:32:37.172598    1311 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.172579254 podStartE2EDuration="41.172579254s" podCreationTimestamp="2025-10-18 09:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:32:37.156614973 +0000 UTC m=+48.725611355" watchObservedRunningTime="2025-10-18 09:32:37.172579254 +0000 UTC m=+48.741575636"
	Oct 18 09:32:39 embed-certs-559379 kubelet[1311]: I1018 09:32:39.433478    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmv7c\" (UniqueName: \"kubernetes.io/projected/a8bb1150-d2bb-4277-91a3-9eb18dfdfc48-kube-api-access-qmv7c\") pod \"busybox\" (UID: \"a8bb1150-d2bb-4277-91a3-9eb18dfdfc48\") " pod="default/busybox"
	
	
	==> storage-provisioner [1eb7e3b4bbfdc87586b321047a6a8f5e93ee8c3f0197977af4ff4c0f98ea8837] <==
	I1018 09:32:36.654223       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:32:36.678832       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:32:36.680041       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:32:36.683887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:36.692669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:32:36.692979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:32:36.693515       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-559379_6d09ddd0-644f-4934-86b5-c018fdd6743d!
	I1018 09:32:36.694844       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2540f48-50f2-4174-a7e5-a267c71bfb5e", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-559379_6d09ddd0-644f-4934-86b5-c018fdd6743d became leader
	W1018 09:32:36.704095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:36.706927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:32:36.794582       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-559379_6d09ddd0-644f-4934-86b5-c018fdd6743d!
	W1018 09:32:38.710308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:38.714459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:40.717695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:40.724415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:42.727535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:42.732730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:44.735098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:44.742172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:46.745160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:46.749517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:48.759823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:48.771582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:50.775958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:32:50.784637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
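The storage-provisioner log above ends in a run of "v1 Endpoints is deprecated in v1.33+" warnings: its leader-election record is the v1 Endpoints object kube-system/k8s.io-minikube-hostpath named in the LeaderElection event, so every lease renewal touches the deprecated API. A minimal sketch for inspecting both the legacy object and its discovery.k8s.io/v1 replacement, assuming the embed-certs-559379 context from this run is still reachable:

	kubectl --context embed-certs-559379 -n kube-system get endpoints k8s.io-minikube-hostpath
	kubectl --context embed-certs-559379 -n kube-system get endpointslices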
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-559379 -n embed-certs-559379
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-559379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.10s)
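The dump also explains the late NodeReady: kindnet's initial list/watch calls against 10.96.0.1:443 timed out at 09:32:25 (30 seconds after its 09:31:55 startup), its caches only synced at 09:32:27, the node flipped Ready at 09:32:36, and coredns, storage-provisioner and the busybox pod started only after that. A sketch for recovering the same sequence from a live profile, reusing the pod name from this run:

	kubectl --context embed-certs-559379 -n kube-system logs kindnet-6ltrq --timestamps
	kubectl --context embed-certs-559379 get events -A --sort-by=.lastTimestamp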

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-886951 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-886951 --alsologtostderr -v=1: exit status 80 (1.92526822s)
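The exit-80 failure stems from the container-runtime probe: after collecting CRI container IDs with crictl, the pause path runs `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory" and is retried (visible in the stderr transcript below). A hand-run version of the same probe, hedged: runc's global --root flag is real, but the alternate state directory shown is only an illustration and is not taken from this run:

	sudo runc list -f json                # fails as in the log when /run/runc is absent
	sudo runc --root /run/crio/runc list  # hypothetical state root; substitute whatever root crio is actually configured to use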

                                                
                                                
-- stdout --
	* Pausing node no-preload-886951 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:33:38.848877 1473298 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:38.849110 1473298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:38.849142 1473298 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:38.849161 1473298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:38.849452 1473298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:33:38.849752 1473298 out.go:368] Setting JSON to false
	I1018 09:33:38.849802 1473298 mustload.go:65] Loading cluster: no-preload-886951
	I1018 09:33:38.850201 1473298 config.go:182] Loaded profile config "no-preload-886951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:38.850738 1473298 cli_runner.go:164] Run: docker container inspect no-preload-886951 --format={{.State.Status}}
	I1018 09:33:38.868306 1473298 host.go:66] Checking if "no-preload-886951" exists ...
	I1018 09:33:38.868616 1473298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:38.926851 1473298 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 09:33:38.91682558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:33:38.927513 1473298 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-886951 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:33:38.930805 1473298 out.go:179] * Pausing node no-preload-886951 ... 
	I1018 09:33:38.933824 1473298 host.go:66] Checking if "no-preload-886951" exists ...
	I1018 09:33:38.934203 1473298 ssh_runner.go:195] Run: systemctl --version
	I1018 09:33:38.934254 1473298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-886951
	I1018 09:33:38.952932 1473298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34891 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/no-preload-886951/id_rsa Username:docker}
	I1018 09:33:39.058448 1473298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:33:39.085163 1473298 pause.go:52] kubelet running: true
	I1018 09:33:39.085260 1473298 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:33:39.367258 1473298 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:33:39.367350 1473298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:33:39.435085 1473298 cri.go:89] found id: "4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512"
	I1018 09:33:39.435148 1473298 cri.go:89] found id: "f505975793d59951a10cdd493040db58400af6b79be4d56832adb20fa9f0f241"
	I1018 09:33:39.435166 1473298 cri.go:89] found id: "306e8654ef730e9d2a05a0f8eb73d98594c93f8ae0707c0c01c6dafb942fbf13"
	I1018 09:33:39.435185 1473298 cri.go:89] found id: "94b1e528e52b0ea9b1d5837d3abfb4568b7db6f71a5853e75fe390f34c4c6734"
	I1018 09:33:39.435203 1473298 cri.go:89] found id: "c1c8eeb2955365fe9513d621ef316f0153e8d1875eecd9d5277bde4191548620"
	I1018 09:33:39.435235 1473298 cri.go:89] found id: "52a4f82d25803437e2b4f9a5a0979d2eddfe52226bc7144054185dd64cbed59e"
	I1018 09:33:39.435258 1473298 cri.go:89] found id: "0b366ec41824a247d56af9aadad985448d0c26d9381e2243c07a327589c034da"
	I1018 09:33:39.435278 1473298 cri.go:89] found id: "0cc3656fad24ee9a111ade774682a71330029b5e0750b4e80a331f7222647630"
	I1018 09:33:39.435296 1473298 cri.go:89] found id: "8333b66cfc8ee31208ccfc044b7c62e87b44c35fc9b5f0567f504bfb9f50c42b"
	I1018 09:33:39.435319 1473298 cri.go:89] found id: "6fe9487b048d325f2717ac2cb4ee4019221e040bc84f42632e57d631c864c681"
	I1018 09:33:39.435342 1473298 cri.go:89] found id: "3c99758e0b671868c66b76b5f6341a5c4d2743886ca97fd1ad90d31b840aeea0"
	I1018 09:33:39.435363 1473298 cri.go:89] found id: ""
	I1018 09:33:39.435450 1473298 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:33:39.453911 1473298 retry.go:31] will retry after 364.245455ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:33:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:33:39.819347 1473298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:33:39.832525 1473298 pause.go:52] kubelet running: false
	I1018 09:33:39.832610 1473298 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:33:39.996751 1473298 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:33:39.996830 1473298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:33:40.078413 1473298 cri.go:89] found id: "4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512"
	I1018 09:33:40.078437 1473298 cri.go:89] found id: "f505975793d59951a10cdd493040db58400af6b79be4d56832adb20fa9f0f241"
	I1018 09:33:40.078443 1473298 cri.go:89] found id: "306e8654ef730e9d2a05a0f8eb73d98594c93f8ae0707c0c01c6dafb942fbf13"
	I1018 09:33:40.078446 1473298 cri.go:89] found id: "94b1e528e52b0ea9b1d5837d3abfb4568b7db6f71a5853e75fe390f34c4c6734"
	I1018 09:33:40.078450 1473298 cri.go:89] found id: "c1c8eeb2955365fe9513d621ef316f0153e8d1875eecd9d5277bde4191548620"
	I1018 09:33:40.078454 1473298 cri.go:89] found id: "52a4f82d25803437e2b4f9a5a0979d2eddfe52226bc7144054185dd64cbed59e"
	I1018 09:33:40.078457 1473298 cri.go:89] found id: "0b366ec41824a247d56af9aadad985448d0c26d9381e2243c07a327589c034da"
	I1018 09:33:40.078461 1473298 cri.go:89] found id: "0cc3656fad24ee9a111ade774682a71330029b5e0750b4e80a331f7222647630"
	I1018 09:33:40.078464 1473298 cri.go:89] found id: "8333b66cfc8ee31208ccfc044b7c62e87b44c35fc9b5f0567f504bfb9f50c42b"
	I1018 09:33:40.078470 1473298 cri.go:89] found id: "6fe9487b048d325f2717ac2cb4ee4019221e040bc84f42632e57d631c864c681"
	I1018 09:33:40.078474 1473298 cri.go:89] found id: "3c99758e0b671868c66b76b5f6341a5c4d2743886ca97fd1ad90d31b840aeea0"
	I1018 09:33:40.078477 1473298 cri.go:89] found id: ""
	I1018 09:33:40.078529 1473298 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:33:40.090287 1473298 retry.go:31] will retry after 332.019711ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:33:40Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:33:40.422628 1473298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:33:40.435494 1473298 pause.go:52] kubelet running: false
	I1018 09:33:40.435581 1473298 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:33:40.609538 1473298 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:33:40.609613 1473298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:33:40.683370 1473298 cri.go:89] found id: "4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512"
	I1018 09:33:40.683400 1473298 cri.go:89] found id: "f505975793d59951a10cdd493040db58400af6b79be4d56832adb20fa9f0f241"
	I1018 09:33:40.683406 1473298 cri.go:89] found id: "306e8654ef730e9d2a05a0f8eb73d98594c93f8ae0707c0c01c6dafb942fbf13"
	I1018 09:33:40.683410 1473298 cri.go:89] found id: "94b1e528e52b0ea9b1d5837d3abfb4568b7db6f71a5853e75fe390f34c4c6734"
	I1018 09:33:40.683414 1473298 cri.go:89] found id: "c1c8eeb2955365fe9513d621ef316f0153e8d1875eecd9d5277bde4191548620"
	I1018 09:33:40.683418 1473298 cri.go:89] found id: "52a4f82d25803437e2b4f9a5a0979d2eddfe52226bc7144054185dd64cbed59e"
	I1018 09:33:40.683422 1473298 cri.go:89] found id: "0b366ec41824a247d56af9aadad985448d0c26d9381e2243c07a327589c034da"
	I1018 09:33:40.683426 1473298 cri.go:89] found id: "0cc3656fad24ee9a111ade774682a71330029b5e0750b4e80a331f7222647630"
	I1018 09:33:40.683429 1473298 cri.go:89] found id: "8333b66cfc8ee31208ccfc044b7c62e87b44c35fc9b5f0567f504bfb9f50c42b"
	I1018 09:33:40.683435 1473298 cri.go:89] found id: "6fe9487b048d325f2717ac2cb4ee4019221e040bc84f42632e57d631c864c681"
	I1018 09:33:40.683444 1473298 cri.go:89] found id: "3c99758e0b671868c66b76b5f6341a5c4d2743886ca97fd1ad90d31b840aeea0"
	I1018 09:33:40.683447 1473298 cri.go:89] found id: ""
	I1018 09:33:40.683496 1473298 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:33:40.698241 1473298 out.go:203] 
	W1018 09:33:40.701139 1473298 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:33:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:33:40.701159 1473298 out.go:285] * 
	W1018 09:33:40.711046 1473298 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:33:40.713985 1473298 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-886951 --alsologtostderr -v=1 failed: exit status 80
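Note: the pause flow visible in the stderr capture above is: disable the kubelet, list CRI containers via crictl, then enumerate runtime state with `sudo runc list -f json`. On this CRI-O node /run/runc does not exist, so each attempt fails and pause gives up with GUEST_PAUSE once its retries are spent. Below is a minimal Go sketch of that retry-then-fail shape; the attempt count and delay range are illustrative assumptions, not minikube's actual retry.go implementation.

// Sketch of the retry-then-fail pattern suggested by the retry.go lines in the
// pause log above. Attempt count and delay range are illustrative assumptions.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRuncContainers mirrors the logged command: sudo runc list -f json.
// On this node it fails because /run/runc does not exist.
func listRuncContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	var out []byte
	var err error
	for attempt := 0; attempt < 3; attempt++ {
		if out, err = listRuncContainers(); err == nil {
			break
		}
		// Randomized backoff, in the spirit of "will retry after 364.245455ms".
		d := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	if err != nil {
		// This is where pause bails out with GUEST_PAUSE in the log above.
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println(string(out))
}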
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-886951
helpers_test.go:243: (dbg) docker inspect no-preload-886951:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244",
	        "Created": "2025-10-18T09:30:57.518122221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1468274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:32:35.619107214Z",
	            "FinishedAt": "2025-10-18T09:32:34.792023238Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/hostname",
	        "HostsPath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/hosts",
	        "LogPath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244-json.log",
	        "Name": "/no-preload-886951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-886951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-886951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244",
	                "LowerDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-886951",
	                "Source": "/var/lib/docker/volumes/no-preload-886951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-886951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-886951",
	                "name.minikube.sigs.k8s.io": "no-preload-886951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b5ccb14c46356b20c59d22612f2656312c5aab6841f54604aa454aaaaa5321a",
	            "SandboxKey": "/var/run/docker/netns/2b5ccb14c463",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34891"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34892"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34895"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34893"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-886951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:ed:d4:de:f4:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3e5f60352a068e220fd2810f4516a5014f16c78647f632d14d145d4ec80d9b4f",
	                    "EndpointID": "7146692b9ff75e4a9f15b31744ba1759a84edb39bcae46b34e67a07b0baf1565",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-886951",
	                        "53265fd5269c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
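Note: in the inspect output above, every entry under PortBindings requests HostPort "", so Docker assigns ephemeral host ports at container start; the assigned values appear under NetworkSettings.Ports (22/tcp maps to 34891, the port the SSH client in the pause log dialed). A minimal Go sketch that recovers such a mapping with the same Go-template query the harness logs (cli_runner.go); it assumes the Docker CLI is on PATH and reuses the container name from this report.

// Sketch: read the ephemeral host port Docker assigned to a container port,
// using the Go-template query seen in the cli_runner.go log lines above.
// Assumes the Docker CLI is on PATH; container name comes from this report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Index NetworkSettings.Ports["22/tcp"][0].HostPort, as in the log above.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "no-preload-886951").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 34891
}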
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-886951 -n no-preload-886951
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-886951 -n no-preload-886951: exit status 2 (338.474254ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-886951 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-886951 logs -n 25: (1.373660705s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-783705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-783705    │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ delete  │ -p cert-options-783705                                                                                                                                                                                                                        │ cert-options-783705    │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │                     │
	│ stop    │ -p old-k8s-version-136598 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-136598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-854768 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p cert-expiration-854768                                                                                                                                                                                                                     │ cert-expiration-854768 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ image   │ old-k8s-version-136598 image list --format=json                                                                                                                                                                                               │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-136598 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p no-preload-886951 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p embed-certs-559379 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-559379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ image   │ no-preload-886951 image list --format=json                                                                                                                                                                                                    │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:33:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:33:04.244338 1471064 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:04.244477 1471064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:04.244489 1471064 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:04.244495 1471064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:04.244780 1471064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:33:04.245206 1471064 out.go:368] Setting JSON to false
	I1018 09:33:04.246237 1471064 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40532,"bootTime":1760739453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:33:04.246304 1471064 start.go:141] virtualization:  
	I1018 09:33:04.251238 1471064 out.go:179] * [embed-certs-559379] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:33:04.254257 1471064 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:33:04.254329 1471064 notify.go:220] Checking for updates...
	I1018 09:33:04.259915 1471064 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:33:04.262943 1471064 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:33:04.265873 1471064 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:33:04.268797 1471064 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:33:04.271831 1471064 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:33:04.275137 1471064 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:04.275734 1471064 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:33:04.303075 1471064 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:33:04.303185 1471064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:04.360091 1471064 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 09:33:04.351114774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:33:04.360204 1471064 docker.go:318] overlay module found
	I1018 09:33:04.363411 1471064 out.go:179] * Using the docker driver based on existing profile
	I1018 09:33:04.366326 1471064 start.go:305] selected driver: docker
	I1018 09:33:04.366346 1471064 start.go:925] validating driver "docker" against &{Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:33:04.366443 1471064 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:33:04.367186 1471064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:04.428070 1471064 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 09:33:04.418834832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:33:04.428418 1471064 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:33:04.428447 1471064 cni.go:84] Creating CNI manager for ""
	I1018 09:33:04.428508 1471064 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:33:04.428560 1471064 start.go:349] cluster config:
	{Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:33:04.431876 1471064 out.go:179] * Starting "embed-certs-559379" primary control-plane node in "embed-certs-559379" cluster
	I1018 09:33:04.434770 1471064 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:33:04.438491 1471064 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:33:04.441327 1471064 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:33:04.441442 1471064 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:33:04.441457 1471064 cache.go:58] Caching tarball of preloaded images
	I1018 09:33:04.441374 1471064 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:33:04.441544 1471064 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:33:04.441555 1471064 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:33:04.441670 1471064 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/config.json ...
	I1018 09:33:04.460858 1471064 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:33:04.460880 1471064 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:33:04.460897 1471064 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:33:04.460926 1471064 start.go:360] acquireMachinesLock for embed-certs-559379: {Name:mk418755d6e5d94c4c79fcae2f644d56877c0df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:33:04.461085 1471064 start.go:364] duration metric: took 134.88µs to acquireMachinesLock for "embed-certs-559379"
	I1018 09:33:04.461112 1471064 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:33:04.461124 1471064 fix.go:54] fixHost starting: 
	I1018 09:33:04.461393 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:04.477582 1471064 fix.go:112] recreateIfNeeded on embed-certs-559379: state=Stopped err=<nil>
	W1018 09:33:04.477622 1471064 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:33:01.936396 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:04.436914 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:04.480760 1471064 out.go:252] * Restarting existing docker container for "embed-certs-559379" ...
	I1018 09:33:04.480850 1471064 cli_runner.go:164] Run: docker start embed-certs-559379
	I1018 09:33:04.728928 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:04.749787 1471064 kic.go:430] container "embed-certs-559379" state is running.
	I1018 09:33:04.750145 1471064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-559379
	I1018 09:33:04.772683 1471064 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/config.json ...
	I1018 09:33:04.772909 1471064 machine.go:93] provisionDockerMachine start ...
	I1018 09:33:04.772986 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:04.794344 1471064 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:04.794898 1471064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34896 <nil> <nil>}
	I1018 09:33:04.794915 1471064 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:33:04.795607 1471064 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 09:33:07.947582 1471064 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-559379
	
	I1018 09:33:07.947678 1471064 ubuntu.go:182] provisioning hostname "embed-certs-559379"
	I1018 09:33:07.947777 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:07.965567 1471064 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:07.965882 1471064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34896 <nil> <nil>}
	I1018 09:33:07.965899 1471064 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-559379 && echo "embed-certs-559379" | sudo tee /etc/hostname
	I1018 09:33:08.128833 1471064 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-559379
	
	I1018 09:33:08.128936 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:08.146515 1471064 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:08.146823 1471064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34896 <nil> <nil>}
	I1018 09:33:08.146845 1471064 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-559379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-559379/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-559379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:33:08.291974 1471064 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:33:08.291999 1471064 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:33:08.292032 1471064 ubuntu.go:190] setting up certificates
	I1018 09:33:08.292044 1471064 provision.go:84] configureAuth start
	I1018 09:33:08.292110 1471064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-559379
	I1018 09:33:08.309908 1471064 provision.go:143] copyHostCerts
	I1018 09:33:08.309975 1471064 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:33:08.309997 1471064 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:33:08.310075 1471064 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:33:08.310186 1471064 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:33:08.310197 1471064 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:33:08.310226 1471064 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:33:08.310286 1471064 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:33:08.310297 1471064 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:33:08.310323 1471064 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:33:08.310377 1471064 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.embed-certs-559379 san=[127.0.0.1 192.168.76.2 embed-certs-559379 localhost minikube]
	W1018 09:33:06.936736 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:08.938593 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:09.458705 1471064 provision.go:177] copyRemoteCerts
	I1018 09:33:09.458783 1471064 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:33:09.458832 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:09.475395 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:09.579537 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:33:09.598201 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:33:09.617496 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:33:09.636861 1471064 provision.go:87] duration metric: took 1.344788573s to configureAuth
	I1018 09:33:09.636943 1471064 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:33:09.637176 1471064 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:09.637281 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:09.654247 1471064 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:09.654558 1471064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34896 <nil> <nil>}
	I1018 09:33:09.654577 1471064 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:33:09.989773 1471064 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:33:09.989802 1471064 machine.go:96] duration metric: took 5.21688175s to provisionDockerMachine
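
The provisioning step above pushed a CRIO_MINIKUBE_OPTIONS drop-in over SSH and restarted CRI-O. A minimal Go sketch of driving such a remote step with golang.org/x/crypto/ssh; the error handling is illustrative and this is not minikube's actual ssh_runner code (IP, port, user, and key path are taken from the sshutil lines logged above):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:34896", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        // Same shape as the logged command: write the sysconfig drop-in, then restart the runtime.
        out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
        fmt.Printf("err=%v\noutput: %s\n", err, out)
    }
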
	I1018 09:33:09.989814 1471064 start.go:293] postStartSetup for "embed-certs-559379" (driver="docker")
	I1018 09:33:09.989850 1471064 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:33:09.989952 1471064 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:33:09.990018 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:10.016475 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:10.128401 1471064 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:33:10.132147 1471064 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:33:10.132181 1471064 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:33:10.132201 1471064 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:33:10.132290 1471064 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:33:10.132397 1471064 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:33:10.132529 1471064 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:33:10.140550 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:33:10.158998 1471064 start.go:296] duration metric: took 169.167185ms for postStartSetup
	I1018 09:33:10.159079 1471064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:33:10.159126 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:10.176644 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:10.276978 1471064 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:33:10.281640 1471064 fix.go:56] duration metric: took 5.820508176s for fixHost
	I1018 09:33:10.281667 1471064 start.go:83] releasing machines lock for "embed-certs-559379", held for 5.820569319s
	I1018 09:33:10.281743 1471064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-559379
	I1018 09:33:10.299273 1471064 ssh_runner.go:195] Run: cat /version.json
	I1018 09:33:10.299323 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:10.299330 1471064 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:33:10.299392 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:10.317724 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:10.320150 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:10.528558 1471064 ssh_runner.go:195] Run: systemctl --version
	I1018 09:33:10.535100 1471064 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:33:10.570927 1471064 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:33:10.575901 1471064 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:33:10.575982 1471064 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:33:10.583568 1471064 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:33:10.583595 1471064 start.go:495] detecting cgroup driver to use...
	I1018 09:33:10.583625 1471064 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:33:10.583673 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:33:10.598998 1471064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:33:10.612585 1471064 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:33:10.612665 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:33:10.627884 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:33:10.641885 1471064 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:33:10.763791 1471064 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:33:10.885339 1471064 docker.go:234] disabling docker service ...
	I1018 09:33:10.885407 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:33:10.901655 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:33:10.915118 1471064 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:33:11.046575 1471064 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:33:11.172548 1471064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:33:11.187439 1471064 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:33:11.201695 1471064 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:33:11.201793 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.211537 1471064 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:33:11.211607 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.221318 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.230188 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.239137 1471064 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:33:11.247934 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.257235 1471064 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.271292 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.280389 1471064 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:33:11.288151 1471064 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:33:11.295468 1471064 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:33:11.415987 1471064 ssh_runner.go:195] Run: sudo systemctl restart crio
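
The block above is minikube's standard CRI-O reconfiguration: in-place sed rewrites of /etc/crio/crio.conf.d/02-crio.conf for the pause image and cgroup manager, followed by daemon-reload and a runtime restart. The same rewrite expressed in Go; the setKey helper is an illustrative sketch, not minikube's implementation:

    package main

    import (
        "os"
        "regexp"
    )

    // setKey replaces every `<key> = ...` line with `<key> = "<value>"`,
    // mirroring the sed expressions in the log above.
    func setKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
            panic(err)
        }
        if err := setKey(conf, "cgroup_manager", "cgroupfs"); err != nil {
            panic(err)
        }
        // As in the log, follow with `systemctl daemon-reload` and `systemctl restart crio`.
    }
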
	I1018 09:33:11.554210 1471064 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:33:11.554294 1471064 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:33:11.557974 1471064 start.go:563] Will wait 60s for crictl version
	I1018 09:33:11.558084 1471064 ssh_runner.go:195] Run: which crictl
	I1018 09:33:11.561602 1471064 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:33:11.589957 1471064 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:33:11.590115 1471064 ssh_runner.go:195] Run: crio --version
	I1018 09:33:11.619324 1471064 ssh_runner.go:195] Run: crio --version
	I1018 09:33:11.651781 1471064 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:33:11.654563 1471064 cli_runner.go:164] Run: docker network inspect embed-certs-559379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:33:11.670565 1471064 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:33:11.674415 1471064 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
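
The hosts edit above is idempotent: any existing host.minikube.internal line is filtered out before the fresh entry is appended, the result goes to a temp file, and sudo cp moves it into place (a plain shell redirect would fail because /etc/hosts is root-owned). A Go sketch of the same filter-then-append pattern, under the same assumption that the caller may escalate via sudo:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const entry = "192.168.76.1\thost.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop stale entries for the same name, like grep -v $'\thost.minikube.internal$'.
            if strings.HasSuffix(line, "\thost.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        tmp, err := os.CreateTemp("", "hosts")
        if err != nil {
            panic(err)
        }
        defer os.Remove(tmp.Name())
        if _, err := tmp.WriteString(strings.Join(kept, "\n") + "\n"); err != nil {
            panic(err)
        }
        tmp.Close()
        if err := exec.Command("sudo", "cp", tmp.Name(), "/etc/hosts").Run(); err != nil {
            panic(err)
        }
    }
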
	I1018 09:33:11.684407 1471064 kubeadm.go:883] updating cluster {Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:33:11.684523 1471064 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:33:11.684578 1471064 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:33:11.721230 1471064 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:33:11.721254 1471064 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:33:11.721309 1471064 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:33:11.747273 1471064 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:33:11.747297 1471064 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:33:11.747305 1471064 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:33:11.747401 1471064 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-559379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:33:11.747485 1471064 ssh_runner.go:195] Run: crio config
	I1018 09:33:11.806970 1471064 cni.go:84] Creating CNI manager for ""
	I1018 09:33:11.806993 1471064 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:33:11.807012 1471064 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:33:11.807034 1471064 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-559379 NodeName:embed-certs-559379 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:33:11.807166 1471064 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-559379"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:33:11.807241 1471064 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:33:11.815112 1471064 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:33:11.815179 1471064 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:33:11.822498 1471064 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:33:11.834982 1471064 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:33:11.847811 1471064 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1018 09:33:11.861785 1471064 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:33:11.865531 1471064 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:33:11.874939 1471064 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:33:11.997005 1471064 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:33:12.015870 1471064 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379 for IP: 192.168.76.2
	I1018 09:33:12.015893 1471064 certs.go:195] generating shared ca certs ...
	I1018 09:33:12.015926 1471064 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:12.016082 1471064 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:33:12.016129 1471064 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:33:12.016145 1471064 certs.go:257] generating profile certs ...
	I1018 09:33:12.016237 1471064 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/client.key
	I1018 09:33:12.016292 1471064 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key.9dbb2352
	I1018 09:33:12.016335 1471064 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.key
	I1018 09:33:12.016456 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:33:12.016491 1471064 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:33:12.016505 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:33:12.016528 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:33:12.016554 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:33:12.016581 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:33:12.016631 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:33:12.017256 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:33:12.042042 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:33:12.062375 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:33:12.082194 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:33:12.114465 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:33:12.136168 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:33:12.158249 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:33:12.194668 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:33:12.214751 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:33:12.235039 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:33:12.252778 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:33:12.270510 1471064 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:33:12.283890 1471064 ssh_runner.go:195] Run: openssl version
	I1018 09:33:12.290440 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:33:12.299311 1471064 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:33:12.303278 1471064 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:33:12.303341 1471064 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:33:12.347337 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:33:12.355029 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:33:12.363750 1471064 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:33:12.368471 1471064 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:33:12.368566 1471064 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:33:12.410009 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:33:12.417993 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:33:12.426306 1471064 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:33:12.430543 1471064 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:33:12.430617 1471064 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:33:12.472438 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
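
Each CA is also linked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL's default verify path locates trust anchors. A sketch that derives the hash the same way the commands above effectively do, by asking openssl itself (computing the subject hash natively in Go is possible but fiddly); writing into /etc/ssl/certs needs root, hence the sudo in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/12760972.pem"
        // `openssl x509 -hash -noout` prints the subject-name hash, e.g. "3ec20f2e".
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // emulate ln -fs by replacing any existing link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
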
	I1018 09:33:12.480183 1471064 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:33:12.483718 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:33:12.526096 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:33:12.572260 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:33:12.614442 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:33:12.677036 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:33:12.747948 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
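
The six openssl runs above use -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that is how the restart path decides whether control-plane certs need regeneration. The same check with Go's standard library (cert path taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of -checkend 86400: will the cert still be valid 24h from now?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h; regenerate")
            os.Exit(1)
        }
        fmt.Println("certificate valid beyond 24h")
    }
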
	I1018 09:33:12.815768 1471064 kubeadm.go:400] StartCluster: {Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:33:12.815986 1471064 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:33:12.816085 1471064 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:33:12.883757 1471064 cri.go:89] found id: "107f423c474214a77b70bea579d8693f96941573e52099aab36ca04cad80b9fb"
	I1018 09:33:12.883834 1471064 cri.go:89] found id: "9e4bebc346e34245095acfdc99e4bf27d586ba1008354824cc3842710f552d3d"
	I1018 09:33:12.883868 1471064 cri.go:89] found id: "1fa42435e829fa1ff7a0af9be9dc7035e7cc16ae52106466d057fafcbaf6e9bb"
	I1018 09:33:12.883897 1471064 cri.go:89] found id: "836750ba877589f7642d95bcc7eaea0db209e4198f52173d3d62e2a5392defad"
	I1018 09:33:12.883916 1471064 cri.go:89] found id: ""
	I1018 09:33:12.883987 1471064 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:33:12.907215 1471064 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:33:12Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:33:12.907338 1471064 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:33:12.922690 1471064 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:33:12.922750 1471064 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:33:12.922825 1471064 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:33:12.947230 1471064 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:33:12.947964 1471064 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-559379" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:33:12.948293 1471064 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-1274243/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-559379" cluster setting kubeconfig missing "embed-certs-559379" context setting]
	I1018 09:33:12.948795 1471064 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:12.950418 1471064 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:33:12.962052 1471064 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:33:12.962086 1471064 kubeadm.go:601] duration metric: took 39.307459ms to restartPrimaryControlPlane
	I1018 09:33:12.962126 1471064 kubeadm.go:402] duration metric: took 146.368172ms to StartCluster
	I1018 09:33:12.962142 1471064 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:12.962232 1471064 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:33:12.963511 1471064 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:12.963791 1471064 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:33:12.964359 1471064 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:12.964401 1471064 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:33:12.964460 1471064 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-559379"
	I1018 09:33:12.964478 1471064 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-559379"
	W1018 09:33:12.964484 1471064 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:33:12.964505 1471064 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:33:12.964984 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:12.965158 1471064 addons.go:69] Setting dashboard=true in profile "embed-certs-559379"
	I1018 09:33:12.965187 1471064 addons.go:238] Setting addon dashboard=true in "embed-certs-559379"
	W1018 09:33:12.965209 1471064 addons.go:247] addon dashboard should already be in state true
	I1018 09:33:12.965254 1471064 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:33:12.965687 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:12.968377 1471064 addons.go:69] Setting default-storageclass=true in profile "embed-certs-559379"
	I1018 09:33:12.968412 1471064 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-559379"
	I1018 09:33:12.974509 1471064 out.go:179] * Verifying Kubernetes components...
	I1018 09:33:12.974905 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:12.981796 1471064 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:33:13.017924 1471064 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:33:13.021842 1471064 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:33:13.024987 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:33:13.025017 1471064 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:33:13.025103 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:13.036943 1471064 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:33:13.043432 1471064 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:33:13.043467 1471064 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:33:13.043545 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:13.062354 1471064 addons.go:238] Setting addon default-storageclass=true in "embed-certs-559379"
	W1018 09:33:13.062385 1471064 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:33:13.062409 1471064 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:33:13.062821 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:13.096021 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:13.108777 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:13.127666 1471064 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:33:13.127699 1471064 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:33:13.127761 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:13.157434 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:13.384004 1471064 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:33:13.391431 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:33:13.391457 1471064 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:33:13.419346 1471064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:33:13.423901 1471064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:33:13.447513 1471064 node_ready.go:35] waiting up to 6m0s for node "embed-certs-559379" to be "Ready" ...
	I1018 09:33:13.449087 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:33:13.449113 1471064 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:33:13.544223 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:33:13.544249 1471064 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:33:13.609966 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:33:13.609997 1471064 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:33:13.666426 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:33:13.666452 1471064 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:33:13.698459 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:33:13.698486 1471064 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:33:13.715891 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:33:13.715913 1471064 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:33:13.728554 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:33:13.728579 1471064 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:33:13.742636 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:33:13.742662 1471064 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:33:13.757270 1471064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1018 09:33:10.941428 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:13.437208 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:17.724427 1471064 node_ready.go:49] node "embed-certs-559379" is "Ready"
	I1018 09:33:17.724468 1471064 node_ready.go:38] duration metric: took 4.276905534s for node "embed-certs-559379" to be "Ready" ...
	I1018 09:33:17.724492 1471064 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:33:17.724575 1471064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:33:19.566885 1471064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.147503177s)
	I1018 09:33:19.566930 1471064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.14300375s)
	I1018 09:33:19.650074 1471064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.892758176s)
	I1018 09:33:19.650274 1471064 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.925682992s)
	I1018 09:33:19.650310 1471064 api_server.go:72] duration metric: took 6.686487227s to wait for apiserver process to appear ...
	I1018 09:33:19.650330 1471064 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:33:19.650360 1471064 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:33:19.653217 1471064 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-559379 addons enable metrics-server
	
	I1018 09:33:19.656090 1471064 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1018 09:33:15.437884 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:17.440550 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:19.937860 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:19.659040 1471064 addons.go:514] duration metric: took 6.694618652s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 09:33:19.663946 1471064 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:33:19.667954 1471064 api_server.go:141] control plane version: v1.34.1
	I1018 09:33:19.667975 1471064 api_server.go:131] duration metric: took 17.626922ms to wait for apiserver health ...
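
The healthz wait above is a plain HTTPS GET against https://192.168.76.2:8443/healthz that succeeds once the body reads "ok". A minimal sketch; TLS verification is skipped here purely for illustration, since a proper client would trust the cluster CA at /var/lib/minikube/certs/ca.crt instead:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Assumption: verification skipped only for the sketch; prefer pinning the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
    }
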
	I1018 09:33:19.667984 1471064 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:33:19.685908 1471064 system_pods.go:59] 8 kube-system pods found
	I1018 09:33:19.685981 1471064 system_pods.go:61] "coredns-66bc5c9577-t9blq" [07dead7a-c196-4355-8e63-d7dbe47b07cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:33:19.686007 1471064 system_pods.go:61] "etcd-embed-certs-559379" [473c810d-3278-481b-ad96-7f200a82f830] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:33:19.686029 1471064 system_pods.go:61] "kindnet-6ltrq" [ca80e038-38ba-42a6-8275-fcc38916c7ca] Running
	I1018 09:33:19.686075 1471064 system_pods.go:61] "kube-apiserver-embed-certs-559379" [ed153ff3-f3bf-44ba-ad22-b935d59b6c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:33:19.686096 1471064 system_pods.go:61] "kube-controller-manager-embed-certs-559379" [dadcca5c-657c-42e4-865c-cc21d7af7fbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:33:19.686118 1471064 system_pods.go:61] "kube-proxy-82pzn" [4d204191-f23a-4031-a37d-a4c1ec529e4c] Running
	I1018 09:33:19.686149 1471064 system_pods.go:61] "kube-scheduler-embed-certs-559379" [0bc4c8ce-35cf-41a6-a6fa-a1834adb12a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:33:19.686169 1471064 system_pods.go:61] "storage-provisioner" [0e85b72f-adef-4429-bf1f-1f003538e5bb] Running
	I1018 09:33:19.686189 1471064 system_pods.go:74] duration metric: took 18.198558ms to wait for pod list to return data ...
	I1018 09:33:19.686209 1471064 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:33:19.705360 1471064 default_sa.go:45] found service account: "default"
	I1018 09:33:19.705425 1471064 default_sa.go:55] duration metric: took 19.195305ms for default service account to be created ...
	I1018 09:33:19.705449 1471064 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:33:19.714748 1471064 system_pods.go:86] 8 kube-system pods found
	I1018 09:33:19.714825 1471064 system_pods.go:89] "coredns-66bc5c9577-t9blq" [07dead7a-c196-4355-8e63-d7dbe47b07cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:33:19.714850 1471064 system_pods.go:89] "etcd-embed-certs-559379" [473c810d-3278-481b-ad96-7f200a82f830] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:33:19.714871 1471064 system_pods.go:89] "kindnet-6ltrq" [ca80e038-38ba-42a6-8275-fcc38916c7ca] Running
	I1018 09:33:19.714908 1471064 system_pods.go:89] "kube-apiserver-embed-certs-559379" [ed153ff3-f3bf-44ba-ad22-b935d59b6c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:33:19.714937 1471064 system_pods.go:89] "kube-controller-manager-embed-certs-559379" [dadcca5c-657c-42e4-865c-cc21d7af7fbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:33:19.714958 1471064 system_pods.go:89] "kube-proxy-82pzn" [4d204191-f23a-4031-a37d-a4c1ec529e4c] Running
	I1018 09:33:19.714989 1471064 system_pods.go:89] "kube-scheduler-embed-certs-559379" [0bc4c8ce-35cf-41a6-a6fa-a1834adb12a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:33:19.715022 1471064 system_pods.go:89] "storage-provisioner" [0e85b72f-adef-4429-bf1f-1f003538e5bb] Running
	I1018 09:33:19.715071 1471064 system_pods.go:126] duration metric: took 9.58082ms to wait for k8s-apps to be running ...
	I1018 09:33:19.715094 1471064 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:33:19.715175 1471064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:33:19.736456 1471064 system_svc.go:56] duration metric: took 21.354736ms WaitForService to wait for kubelet
	I1018 09:33:19.736530 1471064 kubeadm.go:586] duration metric: took 6.772699487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:33:19.736563 1471064 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:33:19.744765 1471064 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:33:19.744835 1471064 node_conditions.go:123] node cpu capacity is 2
	I1018 09:33:19.744863 1471064 node_conditions.go:105] duration metric: took 8.278563ms to run NodePressure ...
	I1018 09:33:19.744886 1471064 start.go:241] waiting for startup goroutines ...
	I1018 09:33:19.744919 1471064 start.go:246] waiting for cluster config update ...
	I1018 09:33:19.744957 1471064 start.go:255] writing updated cluster config ...
	I1018 09:33:19.745274 1471064 ssh_runner.go:195] Run: rm -f paused
	I1018 09:33:19.749849 1471064 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:33:19.755161 1471064 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t9blq" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:33:21.762955 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:21.946576 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:24.436386 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:25.437104 1468145 pod_ready.go:94] pod "coredns-66bc5c9577-l2rmq" is "Ready"
	I1018 09:33:25.437133 1468145 pod_ready.go:86] duration metric: took 35.006416252s for pod "coredns-66bc5c9577-l2rmq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.440650 1468145 pod_ready.go:83] waiting for pod "etcd-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.444817 1468145 pod_ready.go:94] pod "etcd-no-preload-886951" is "Ready"
	I1018 09:33:25.444850 1468145 pod_ready.go:86] duration metric: took 4.16964ms for pod "etcd-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.448454 1468145 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.453561 1468145 pod_ready.go:94] pod "kube-apiserver-no-preload-886951" is "Ready"
	I1018 09:33:25.453593 1468145 pod_ready.go:86] duration metric: took 5.104145ms for pod "kube-apiserver-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.456659 1468145 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.634401 1468145 pod_ready.go:94] pod "kube-controller-manager-no-preload-886951" is "Ready"
	I1018 09:33:25.634426 1468145 pod_ready.go:86] duration metric: took 177.739016ms for pod "kube-controller-manager-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.834448 1468145 pod_ready.go:83] waiting for pod "kube-proxy-4gbs9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:26.235556 1468145 pod_ready.go:94] pod "kube-proxy-4gbs9" is "Ready"
	I1018 09:33:26.235624 1468145 pod_ready.go:86] duration metric: took 401.148143ms for pod "kube-proxy-4gbs9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:26.434557 1468145 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:26.834635 1468145 pod_ready.go:94] pod "kube-scheduler-no-preload-886951" is "Ready"
	I1018 09:33:26.834715 1468145 pod_ready.go:86] duration metric: took 400.087856ms for pod "kube-scheduler-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:26.834741 1468145 pod_ready.go:40] duration metric: took 36.408287538s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:33:26.904340 1468145 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:33:26.912343 1468145 out.go:179] * Done! kubectl is now configured to use "no-preload-886951" cluster and "default" namespace by default
	W1018 09:33:24.260745 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:26.260871 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:28.763832 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:31.261529 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:33.761788 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:36.260997 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:38.761018 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
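
The repeated pod_ready warnings above are one poll loop: re-read the pod, check its Ready condition, and stop when it flips true or the pod disappears, within the 4m0s budget logged earlier. A client-go sketch of that check (clientset construction from the default kubeconfig; the 2s interval is an assumption, not minikube's actual tuning):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-t9blq", metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                fmt.Println("pod is gone; treat as done")
                return
            }
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod")
    }
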
	
	
	==> CRI-O <==
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.795361005Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=862f3da6-af6d-4d96-a638-cefd5966ac42 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.796256118Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7326b57b-bcfb-41dd-9556-3db71ff5b6a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.796484362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.808488409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.808674201Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/02b006731b4f7ca7b03f2ecdc902b1648131b4fc26a0890e020520eb0fa92338/merged/etc/passwd: no such file or directory"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.808696756Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/02b006731b4f7ca7b03f2ecdc902b1648131b4fc26a0890e020520eb0fa92338/merged/etc/group: no such file or directory"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.808978455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.842543666Z" level=info msg="Created container 4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512: kube-system/storage-provisioner/storage-provisioner" id=7326b57b-bcfb-41dd-9556-3db71ff5b6a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.844637121Z" level=info msg="Starting container: 4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512" id=2f969cd5-d62e-41f4-bf36-2eda7a0025a9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.863576483Z" level=info msg="Started container" PID=1636 containerID=4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512 description=kube-system/storage-provisioner/storage-provisioner id=2f969cd5-d62e-41f4-bf36-2eda7a0025a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f03660d7f76a38c7abe15c94b1408fe200bc53b431bd70a29f5d34fb8dd778ee
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.629153828Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.633227733Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.633376701Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.633447583Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.636808031Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.636967288Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.637039672Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.640708919Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.640851734Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.640927768Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.644343255Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.644483961Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.644560332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.648341748Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.648479525Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4c32c80c458b3       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   f03660d7f76a3       storage-provisioner                          kube-system
	6fe9487b048d3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   0cb015f6ee55a       dashboard-metrics-scraper-6ffb444bf9-p4dqv   kubernetes-dashboard
	3c99758e0b671       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago      Running             kubernetes-dashboard        0                   41aee173dc755       kubernetes-dashboard-855c9754f9-smc6z        kubernetes-dashboard
	f505975793d59       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   21822c335a3a2       coredns-66bc5c9577-l2rmq                     kube-system
	553b5a889b243       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   361ac27ca2eb1       busybox                                      default
	306e8654ef730       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   9d211d1e0d8dc       kube-proxy-4gbs9                             kube-system
	94b1e528e52b0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   53f22fbcbd790       kindnet-l4xmh                                kube-system
	c1c8eeb295536       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago      Exited              storage-provisioner         1                   f03660d7f76a3       storage-provisioner                          kube-system
	52a4f82d25803       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   68e4f0d4a62c6       kube-apiserver-no-preload-886951             kube-system
	0b366ec41824a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   ebc5dd91e8d69       kube-controller-manager-no-preload-886951    kube-system
	0cc3656fad24e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   eb4e23e4eb03b       kube-scheduler-no-preload-886951             kube-system
	8333b66cfc8ee       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   cf4650c5a2ebf       etcd-no-preload-886951                       kube-system
	
	
	==> coredns [f505975793d59951a10cdd493040db58400af6b79be4d56832adb20fa9f0f241] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39737 - 65000 "HINFO IN 8027062649216342014.8987925391557269663. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019779757s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-886951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-886951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=no-preload-886951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_31_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:31:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-886951
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:33:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:33:18 +0000   Sat, 18 Oct 2025 09:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:33:18 +0000   Sat, 18 Oct 2025 09:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:33:18 +0000   Sat, 18 Oct 2025 09:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:33:18 +0000   Sat, 18 Oct 2025 09:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-886951
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                637092e3-28b4-4cc7-8dae-a07e30854491
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-l2rmq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-no-preload-886951                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         113s
	  kube-system                 kindnet-l4xmh                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-886951              250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-886951     200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-4gbs9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-886951              100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-p4dqv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-smc6z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 107s                 kube-proxy       
	  Normal   Starting                 51s                  kube-proxy       
	  Warning  CgroupV1                 2m5s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node no-preload-886951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node no-preload-886951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node no-preload-886951 status is now: NodeHasSufficientPID
	  Normal   Starting                 114s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 114s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  113s                 kubelet          Node no-preload-886951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    113s                 kubelet          Node no-preload-886951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     113s                 kubelet          Node no-preload-886951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           110s                 node-controller  Node no-preload-886951 event: Registered Node no-preload-886951 in Controller
	  Normal   NodeReady                93s                  kubelet          Node no-preload-886951 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-886951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-886951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-886951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                  node-controller  Node no-preload-886951 event: Registered Node no-preload-886951 in Controller
	
	
	==> dmesg <==
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8333b66cfc8ee31208ccfc044b7c62e87b44c35fc9b5f0567f504bfb9f50c42b] <==
	{"level":"warn","ts":"2025-10-18T09:32:46.606596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.627009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.651620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.680191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.697378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.710960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.727149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.747715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.764095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.801458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.814103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.836543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.855642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.868189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.891721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.905611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.926343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.941307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.965136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.988248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.006836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.027066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.043626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.061886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.161118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60354","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:33:41 up 11:16,  0 user,  load average: 2.84, 3.23, 2.68
	Linux no-preload-886951 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [94b1e528e52b0ea9b1d5837d3abfb4568b7db6f71a5853e75fe390f34c4c6734] <==
	I1018 09:32:49.439524       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:32:49.439754       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 09:32:49.439893       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:32:49.439905       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:32:49.439919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:32:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:32:49.626872       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:32:49.626962       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:32:49.626998       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:32:49.627835       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:33:19.627152       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:33:19.627368       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:33:19.628625       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:33:19.628806       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 09:33:21.228125       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:33:21.228255       1 metrics.go:72] Registering metrics
	I1018 09:33:21.228363       1 controller.go:711] "Syncing nftables rules"
	I1018 09:33:29.628757       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:33:29.628897       1 main.go:301] handling current node
	I1018 09:33:39.635007       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:33:39.635043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [52a4f82d25803437e2b4f9a5a0979d2eddfe52226bc7144054185dd64cbed59e] <==
	I1018 09:32:47.924740       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:32:47.924764       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:32:47.924863       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:32:47.925259       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:32:47.925475       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:32:47.930842       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:32:47.931300       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:32:47.931335       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:32:47.948501       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:32:47.954032       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:32:47.960712       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:32:47.965784       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:32:48.008237       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:32:48.662906       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:32:48.749451       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:32:48.764103       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:32:48.969966       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:32:49.307530       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:32:49.425171       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:32:49.802071       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.17.90"}
	I1018 09:32:49.850115       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.54.51"}
	I1018 09:32:52.699389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:32:52.802538       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:32:52.899416       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:32:52.899540       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0b366ec41824a247d56af9aadad985448d0c26d9381e2243c07a327589c034da] <==
	I1018 09:32:52.311914       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:32:52.313020       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:32:52.313168       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:32:52.313782       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:32:52.314782       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:32:52.315736       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:32:52.316219       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:32:52.316566       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:32:52.322108       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:32:52.322883       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:32:52.326927       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:32:52.335694       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:32:52.338982       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:32:52.343758       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:32:52.343767       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:32:52.343783       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:32:52.349011       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:32:52.349118       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:32:52.350503       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:32:52.350598       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:32:52.350715       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:32:52.350757       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:32:52.350788       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:32:52.352714       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:32:52.356017       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [306e8654ef730e9d2a05a0f8eb73d98594c93f8ae0707c0c01c6dafb942fbf13] <==
	I1018 09:32:49.659654       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:32:49.857770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:32:49.971776       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:32:49.971916       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 09:32:49.972038       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:32:50.071693       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:32:50.071767       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:32:50.078874       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:32:50.079231       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:32:50.079248       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:32:50.081094       1 config.go:200] "Starting service config controller"
	I1018 09:32:50.081121       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:32:50.081141       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:32:50.081145       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:32:50.081156       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:32:50.081160       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:32:50.082014       1 config.go:309] "Starting node config controller"
	I1018 09:32:50.082037       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:32:50.082052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:32:50.183373       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:32:50.183782       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:32:50.183820       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0cc3656fad24ee9a111ade774682a71330029b5e0750b4e80a331f7222647630] <==
	I1018 09:32:45.870557       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:32:47.800058       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:32:47.800093       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:32:47.800102       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:32:47.800109       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:32:47.957582       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:32:47.957619       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:32:47.960299       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:32:47.960450       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:32:47.960470       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:32:47.960503       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:32:48.061823       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:32:52 no-preload-886951 kubelet[770]: I1018 09:32:52.960204     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7c54360-20fd-4379-99d3-99b644351635-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-p4dqv\" (UID: \"f7c54360-20fd-4379-99d3-99b644351635\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv"
	Oct 18 09:32:53 no-preload-886951 kubelet[770]: W1018 09:32:53.140882     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/crio-0cb015f6ee55a7614c7408b749cef9306a35bfa503f9eb6c280d3246a8477677 WatchSource:0}: Error finding container 0cb015f6ee55a7614c7408b749cef9306a35bfa503f9eb6c280d3246a8477677: Status 404 returned error can't find the container with id 0cb015f6ee55a7614c7408b749cef9306a35bfa503f9eb6c280d3246a8477677
	Oct 18 09:32:53 no-preload-886951 kubelet[770]: W1018 09:32:53.182422     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/crio-41aee173dc75539e69e498c7b0cb3e857ac90f6f342bc920b71b7798252b487e WatchSource:0}: Error finding container 41aee173dc75539e69e498c7b0cb3e857ac90f6f342bc920b71b7798252b487e: Status 404 returned error can't find the container with id 41aee173dc75539e69e498c7b0cb3e857ac90f6f342bc920b71b7798252b487e
	Oct 18 09:32:54 no-preload-886951 kubelet[770]: I1018 09:32:54.919238     770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:32:57 no-preload-886951 kubelet[770]: I1018 09:32:57.722359     770 scope.go:117] "RemoveContainer" containerID="985a7e6b5de44288deaa53c017ccce7f8d4b4d7c9254ac57bd60ad05381028ff"
	Oct 18 09:32:58 no-preload-886951 kubelet[770]: I1018 09:32:58.727580     770 scope.go:117] "RemoveContainer" containerID="985a7e6b5de44288deaa53c017ccce7f8d4b4d7c9254ac57bd60ad05381028ff"
	Oct 18 09:32:58 no-preload-886951 kubelet[770]: I1018 09:32:58.727953     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:32:58 no-preload-886951 kubelet[770]: E1018 09:32:58.728118     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:32:59 no-preload-886951 kubelet[770]: I1018 09:32:59.731921     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:32:59 no-preload-886951 kubelet[770]: E1018 09:32:59.732044     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:03 no-preload-886951 kubelet[770]: I1018 09:33:03.090520     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:33:03 no-preload-886951 kubelet[770]: E1018 09:33:03.090704     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: I1018 09:33:17.604702     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: I1018 09:33:17.784404     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: I1018 09:33:17.785047     770 scope.go:117] "RemoveContainer" containerID="6fe9487b048d325f2717ac2cb4ee4019221e040bc84f42632e57d631c864c681"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: E1018 09:33:17.785366     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: I1018 09:33:17.832666     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-smc6z" podStartSLOduration=17.210757699 podStartE2EDuration="25.832649111s" podCreationTimestamp="2025-10-18 09:32:52 +0000 UTC" firstStartedPulling="2025-10-18 09:32:53.190738587 +0000 UTC m=+10.798534767" lastFinishedPulling="2025-10-18 09:33:01.81263 +0000 UTC m=+19.420426179" observedRunningTime="2025-10-18 09:33:02.754540139 +0000 UTC m=+20.362336327" watchObservedRunningTime="2025-10-18 09:33:17.832649111 +0000 UTC m=+35.440445299"
	Oct 18 09:33:19 no-preload-886951 kubelet[770]: I1018 09:33:19.793781     770 scope.go:117] "RemoveContainer" containerID="c1c8eeb2955365fe9513d621ef316f0153e8d1875eecd9d5277bde4191548620"
	Oct 18 09:33:23 no-preload-886951 kubelet[770]: I1018 09:33:23.090602     770 scope.go:117] "RemoveContainer" containerID="6fe9487b048d325f2717ac2cb4ee4019221e040bc84f42632e57d631c864c681"
	Oct 18 09:33:23 no-preload-886951 kubelet[770]: E1018 09:33:23.090761     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:35 no-preload-886951 kubelet[770]: I1018 09:33:35.605516     770 scope.go:117] "RemoveContainer" containerID="6fe9487b048d325f2717ac2cb4ee4019221e040bc84f42632e57d631c864c681"
	Oct 18 09:33:35 no-preload-886951 kubelet[770]: E1018 09:33:35.605731     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:39 no-preload-886951 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:33:39 no-preload-886951 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:33:39 no-preload-886951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [3c99758e0b671868c66b76b5f6341a5c4d2743886ca97fd1ad90d31b840aeea0] <==
	2025/10/18 09:33:01 Starting overwatch
	2025/10/18 09:33:01 Using namespace: kubernetes-dashboard
	2025/10/18 09:33:01 Using in-cluster config to connect to apiserver
	2025/10/18 09:33:01 Using secret token for csrf signing
	2025/10/18 09:33:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:33:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:33:01 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:33:01 Generating JWE encryption key
	2025/10/18 09:33:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:33:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:33:02 Initializing JWE encryption key from synchronized object
	2025/10/18 09:33:02 Creating in-cluster Sidecar client
	2025/10/18 09:33:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:33:02 Serving insecurely on HTTP port: 9090
	2025/10/18 09:33:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512] <==
	I1018 09:33:19.869232       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:33:19.892042       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:33:19.892097       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:33:19.897700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:23.353232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:27.613907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:31.211828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:34.265018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:37.286714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:37.291495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:33:37.291639       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:33:37.291811       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-886951_50bbf680-6625-40ee-aea5-a02aa1d95183!
	I1018 09:33:37.292758       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9182a57-e5d4-477c-a0cb-d3046b198831", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-886951_50bbf680-6625-40ee-aea5-a02aa1d95183 became leader
	W1018 09:33:37.299998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:37.303132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:33:37.392650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-886951_50bbf680-6625-40ee-aea5-a02aa1d95183!
	W1018 09:33:39.306196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:39.311092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:41.314220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:41.319186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c1c8eeb2955365fe9513d621ef316f0153e8d1875eecd9d5277bde4191548620] <==
	I1018 09:32:49.499221       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:33:19.503185       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
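The "pod_ready" lines near the top of this capture come from the harness polling each kube-system pod until its Ready condition is True, or until the pod is deleted ("or be gone"). As a rough illustration only — a minimal client-go sketch of that wait pattern, not minikube's actual pod_ready.go implementation; the kubeconfig path, poll interval, timeout, and the coredns pod name are assumptions taken from the log above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the pod is
// gone, mirroring the "waiting for pod ... to be Ready or be gone" log lines.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	tick := time.NewTicker(2 * time.Second) // the log shows roughly 2-2.5s poll intervals
	defer tick.Stop()
	for {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // "or be gone": a deleted pod also ends the wait successfully
		}
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// Any other error (e.g. the apiserver i/o timeouts seen above) just
		// falls through to the next poll until the context expires.
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q never became Ready: %w", name, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	// Pod name taken from the W1018 ... "coredns-66bc5c9577-t9blq" lines above.
	if err := waitPodReady(ctx, client, "kube-system", "coredns-66bc5c9577-t9blq"); err != nil {
		fmt.Println(err)
	}
}

Against this cluster the loop would keep printing the warning-path outcome, since coredns never reports Ready before the dial tcp 10.96.0.1:443 i/o timeouts clear.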
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-886951 -n no-preload-886951
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-886951 -n no-preload-886951: exit status 2 (358.29437ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
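Exit status 2 from the status probe is tolerated here ("may be ok") because a paused or partially stopped cluster still reports an APIServer state on stdout. A hedged sketch of how such a probe can be run and its exit code read back — the binary path, profile, and flags are copied from the command above; this is not the actual helpers_test.go code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", "no-preload-886951", "-n", "no-preload-886951")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode() // e.g. 2 in the run above
	} else if err != nil {
		panic(err) // binary missing or not executable
	}
	fmt.Printf("apiserver=%q exit=%d\n", string(out), code)
}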
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-886951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-886951
helpers_test.go:243: (dbg) docker inspect no-preload-886951:

-- stdout --
	[
	    {
	        "Id": "53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244",
	        "Created": "2025-10-18T09:30:57.518122221Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1468274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:32:35.619107214Z",
	            "FinishedAt": "2025-10-18T09:32:34.792023238Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/hostname",
	        "HostsPath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/hosts",
	        "LogPath": "/var/lib/docker/containers/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244-json.log",
	        "Name": "/no-preload-886951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-886951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-886951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244",
	                "LowerDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dabda70568a757d99fa96e3a83030950f504116ab4097bb4c8b4336b13256c1f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-886951",
	                "Source": "/var/lib/docker/volumes/no-preload-886951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-886951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-886951",
	                "name.minikube.sigs.k8s.io": "no-preload-886951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b5ccb14c46356b20c59d22612f2656312c5aab6841f54604aa454aaaaa5321a",
	            "SandboxKey": "/var/run/docker/netns/2b5ccb14c463",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34891"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34892"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34895"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34893"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-886951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:ed:d4:de:f4:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3e5f60352a068e220fd2810f4516a5014f16c78647f632d14d145d4ec80d9b4f",
	                    "EndpointID": "7146692b9ff75e4a9f15b31744ba1759a84edb39bcae46b34e67a07b0baf1565",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-886951",
	                        "53265fd5269c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
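For reference, the fields shown in the inspect dump above (container state, NetworkSettings.Ports) can also be read programmatically with the Docker Go SDK instead of templating `docker container inspect` output. A minimal sketch, assuming the github.com/docker/docker client library is available; the container name and the 22/tcp port are taken from the dump, and this is an illustration rather than minikube's own code:

	// inspect_sketch.go: read container state and the host port mapped to
	// 22/tcp, mirroring the inspect dump above.
	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Container name from the dump above.
		info, err := cli.ContainerInspect(context.Background(), "no-preload-886951")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("status:", info.State.Status)

		// NetworkSettings.Ports maps container ports to host bindings,
		// e.g. 22/tcp -> 127.0.0.1:34891 in the dump above.
		for _, b := range info.NetworkSettings.Ports[nat.Port("22/tcp")] {
			fmt.Printf("ssh forwarded to %s:%s\n", b.HostIP, b.HostPort)
		}
	}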
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-886951 -n no-preload-886951
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-886951 -n no-preload-886951: exit status 2 (357.385546ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
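The `(may be ok)` annotation exists because `minikube status` encodes component state in its exit code, so a non-zero exit can still come with usable stdout (here `Running`). A sketch of how a caller might branch on that, assuming only that a non-zero exit still carries captured output; no specific meaning for exit code 2 is assumed:

	// status_sketch.go: run the status command and distinguish "ran but
	// reported a degraded state" from "could not run at all".
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "no-preload-886951")
		out, err := cmd.Output() // stdout is captured even on a non-zero exit

		host := strings.TrimSpace(string(out))
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("host:", host) // exit 0: healthy
		case errors.As(err, &ee):
			// Non-zero exit with output, as in the log above: record and move on.
			log.Printf("host=%q exit=%d (may be ok)", host, ee.ExitCode())
		default:
			log.Fatal(err) // the binary itself failed to start
		}
	}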
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-886951 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-886951 logs -n 25: (1.2540973s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-783705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-783705    │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ delete  │ -p cert-options-783705                                                                                                                                                                                                                        │ cert-options-783705    │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:28 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:28 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-136598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │                     │
	│ stop    │ -p old-k8s-version-136598 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-136598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:29 UTC │
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-854768 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p cert-expiration-854768                                                                                                                                                                                                                     │ cert-expiration-854768 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ image   │ old-k8s-version-136598 image list --format=json                                                                                                                                                                                               │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-136598 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598 │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p no-preload-886951 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p embed-certs-559379 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-559379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379     │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ image   │ no-preload-886951 image list --format=json                                                                                                                                                                                                    │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951      │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:33:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:33:04.244338 1471064 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:04.244477 1471064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:04.244489 1471064 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:04.244495 1471064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:04.244780 1471064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:33:04.245206 1471064 out.go:368] Setting JSON to false
	I1018 09:33:04.246237 1471064 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40532,"bootTime":1760739453,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:33:04.246304 1471064 start.go:141] virtualization:  
	I1018 09:33:04.251238 1471064 out.go:179] * [embed-certs-559379] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:33:04.254257 1471064 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:33:04.254329 1471064 notify.go:220] Checking for updates...
	I1018 09:33:04.259915 1471064 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:33:04.262943 1471064 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:33:04.265873 1471064 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:33:04.268797 1471064 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:33:04.271831 1471064 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:33:04.275137 1471064 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:04.275734 1471064 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:33:04.303075 1471064 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:33:04.303185 1471064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:04.360091 1471064 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 09:33:04.351114774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:33:04.360204 1471064 docker.go:318] overlay module found
	I1018 09:33:04.363411 1471064 out.go:179] * Using the docker driver based on existing profile
	I1018 09:33:04.366326 1471064 start.go:305] selected driver: docker
	I1018 09:33:04.366346 1471064 start.go:925] validating driver "docker" against &{Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:33:04.366443 1471064 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:33:04.367186 1471064 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:04.428070 1471064 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 09:33:04.418834832 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:33:04.428418 1471064 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:33:04.428447 1471064 cni.go:84] Creating CNI manager for ""
	I1018 09:33:04.428508 1471064 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:33:04.428560 1471064 start.go:349] cluster config:
	{Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:33:04.431876 1471064 out.go:179] * Starting "embed-certs-559379" primary control-plane node in "embed-certs-559379" cluster
	I1018 09:33:04.434770 1471064 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:33:04.438491 1471064 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:33:04.441327 1471064 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:33:04.441442 1471064 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:33:04.441457 1471064 cache.go:58] Caching tarball of preloaded images
	I1018 09:33:04.441374 1471064 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:33:04.441544 1471064 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:33:04.441555 1471064 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:33:04.441670 1471064 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/config.json ...
	I1018 09:33:04.460858 1471064 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:33:04.460880 1471064 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:33:04.460897 1471064 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:33:04.460926 1471064 start.go:360] acquireMachinesLock for embed-certs-559379: {Name:mk418755d6e5d94c4c79fcae2f644d56877c0df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:33:04.461085 1471064 start.go:364] duration metric: took 134.88µs to acquireMachinesLock for "embed-certs-559379"
	I1018 09:33:04.461112 1471064 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:33:04.461124 1471064 fix.go:54] fixHost starting: 
	I1018 09:33:04.461393 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:04.477582 1471064 fix.go:112] recreateIfNeeded on embed-certs-559379: state=Stopped err=<nil>
	W1018 09:33:04.477622 1471064 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:33:01.936396 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:04.436914 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:04.480760 1471064 out.go:252] * Restarting existing docker container for "embed-certs-559379" ...
	I1018 09:33:04.480850 1471064 cli_runner.go:164] Run: docker start embed-certs-559379
	I1018 09:33:04.728928 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:04.749787 1471064 kic.go:430] container "embed-certs-559379" state is running.
	I1018 09:33:04.750145 1471064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-559379
	I1018 09:33:04.772683 1471064 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/config.json ...
	I1018 09:33:04.772909 1471064 machine.go:93] provisionDockerMachine start ...
	I1018 09:33:04.772986 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:04.794344 1471064 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:04.794898 1471064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34896 <nil> <nil>}
	I1018 09:33:04.794915 1471064 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:33:04.795607 1471064 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 09:33:07.947582 1471064 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-559379
	
	I1018 09:33:07.947678 1471064 ubuntu.go:182] provisioning hostname "embed-certs-559379"
	I1018 09:33:07.947777 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:07.965567 1471064 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:07.965882 1471064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34896 <nil> <nil>}
	I1018 09:33:07.965899 1471064 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-559379 && echo "embed-certs-559379" | sudo tee /etc/hostname
	I1018 09:33:08.128833 1471064 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-559379
	
	I1018 09:33:08.128936 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:08.146515 1471064 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:08.146823 1471064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34896 <nil> <nil>}
	I1018 09:33:08.146845 1471064 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-559379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-559379/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-559379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:33:08.291974 1471064 main.go:141] libmachine: SSH cmd err, output: <nil>: 
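Each provisioning step above (`hostname`, the `/etc/hostname` write, the `/etc/hosts` rewrite) is executed over SSH to the port forwarded on 127.0.0.1. A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the port, user, and key path from this log; it illustrates the mechanism and is not libmachine's actual implementation:

	// ssh_sketch.go: run one provisioning command over the forwarded SSH port.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34896", cfg) // port from the log
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}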
	I1018 09:33:08.291999 1471064 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:33:08.292032 1471064 ubuntu.go:190] setting up certificates
	I1018 09:33:08.292044 1471064 provision.go:84] configureAuth start
	I1018 09:33:08.292110 1471064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-559379
	I1018 09:33:08.309908 1471064 provision.go:143] copyHostCerts
	I1018 09:33:08.309975 1471064 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:33:08.309997 1471064 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:33:08.310075 1471064 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:33:08.310186 1471064 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:33:08.310197 1471064 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:33:08.310226 1471064 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:33:08.310286 1471064 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:33:08.310297 1471064 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:33:08.310323 1471064 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:33:08.310377 1471064 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.embed-certs-559379 san=[127.0.0.1 192.168.76.2 embed-certs-559379 localhost minikube]
	W1018 09:33:06.936736 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:08.938593 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:09.458705 1471064 provision.go:177] copyRemoteCerts
	I1018 09:33:09.458783 1471064 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:33:09.458832 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:09.475395 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:09.579537 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:33:09.598201 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1018 09:33:09.617496 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:33:09.636861 1471064 provision.go:87] duration metric: took 1.344788573s to configureAuth
	I1018 09:33:09.636943 1471064 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:33:09.637176 1471064 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:09.637281 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:09.654247 1471064 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:09.654558 1471064 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34896 <nil> <nil>}
	I1018 09:33:09.654577 1471064 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:33:09.989773 1471064 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:33:09.989802 1471064 machine.go:96] duration metric: took 5.21688175s to provisionDockerMachine
	I1018 09:33:09.989814 1471064 start.go:293] postStartSetup for "embed-certs-559379" (driver="docker")
	I1018 09:33:09.989850 1471064 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:33:09.989952 1471064 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:33:09.990018 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:10.016475 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:10.128401 1471064 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:33:10.132147 1471064 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:33:10.132181 1471064 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:33:10.132201 1471064 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:33:10.132290 1471064 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:33:10.132397 1471064 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:33:10.132529 1471064 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:33:10.140550 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:33:10.158998 1471064 start.go:296] duration metric: took 169.167185ms for postStartSetup
	I1018 09:33:10.159079 1471064 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:33:10.159126 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:10.176644 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:10.276978 1471064 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:33:10.281640 1471064 fix.go:56] duration metric: took 5.820508176s for fixHost
	I1018 09:33:10.281667 1471064 start.go:83] releasing machines lock for "embed-certs-559379", held for 5.820569319s
	I1018 09:33:10.281743 1471064 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-559379
	I1018 09:33:10.299273 1471064 ssh_runner.go:195] Run: cat /version.json
	I1018 09:33:10.299323 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:10.299330 1471064 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:33:10.299392 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:10.317724 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:10.320150 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:10.528558 1471064 ssh_runner.go:195] Run: systemctl --version
	I1018 09:33:10.535100 1471064 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:33:10.570927 1471064 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:33:10.575901 1471064 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:33:10.575982 1471064 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:33:10.583568 1471064 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:33:10.583595 1471064 start.go:495] detecting cgroup driver to use...
	I1018 09:33:10.583625 1471064 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:33:10.583673 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:33:10.598998 1471064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:33:10.612585 1471064 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:33:10.612665 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:33:10.627884 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:33:10.641885 1471064 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:33:10.763791 1471064 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:33:10.885339 1471064 docker.go:234] disabling docker service ...
	I1018 09:33:10.885407 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:33:10.901655 1471064 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:33:10.915118 1471064 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:33:11.046575 1471064 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:33:11.172548 1471064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:33:11.187439 1471064 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:33:11.201695 1471064 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:33:11.201793 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.211537 1471064 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:33:11.211607 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.221318 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.230188 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.239137 1471064 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:33:11.247934 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.257235 1471064 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.271292 1471064 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:33:11.280389 1471064 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:33:11.288151 1471064 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:33:11.295468 1471064 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:33:11.415987 1471064 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:33:11.554210 1471064 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:33:11.554294 1471064 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:33:11.557974 1471064 start.go:563] Will wait 60s for crictl version
	I1018 09:33:11.558084 1471064 ssh_runner.go:195] Run: which crictl
	I1018 09:33:11.561602 1471064 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:33:11.589957 1471064 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
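	The two "Will wait 60s" lines above are bounded polling loops: stat the CRI socket until it appears, then retry crictl until the runtime answers. A sketch of the socket half, assuming a plain poll-with-deadline loop (not minikube's actual code):

	// waitsock_sketch.go: poll for a unix socket with a deadline, the shape of
	// "Will wait 60s for socket path /var/run/crio/crio.sock" above.
	package main

	import (
		"fmt"
		"log"
		"os"
		"time"
	)

	// waitForSocket returns nil once path exists and is a socket, or an error
	// after the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			log.Fatal(err)
		}
		fmt.Println("crio socket is up")
	}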
	I1018 09:33:11.590115 1471064 ssh_runner.go:195] Run: crio --version
	I1018 09:33:11.619324 1471064 ssh_runner.go:195] Run: crio --version
	I1018 09:33:11.651781 1471064 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:33:11.654563 1471064 cli_runner.go:164] Run: docker network inspect embed-certs-559379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:33:11.670565 1471064 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:33:11.674415 1471064 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:33:11.684407 1471064 kubeadm.go:883] updating cluster {Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:33:11.684523 1471064 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:33:11.684578 1471064 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:33:11.721230 1471064 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:33:11.721254 1471064 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:33:11.721309 1471064 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:33:11.747273 1471064 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:33:11.747297 1471064 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:33:11.747305 1471064 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:33:11.747401 1471064 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-559379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:33:11.747485 1471064 ssh_runner.go:195] Run: crio config
	I1018 09:33:11.806970 1471064 cni.go:84] Creating CNI manager for ""
	I1018 09:33:11.806993 1471064 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:33:11.807012 1471064 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:33:11.807034 1471064 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-559379 NodeName:embed-certs-559379 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:33:11.807166 1471064 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-559379"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:33:11.807241 1471064 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:33:11.815112 1471064 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:33:11.815179 1471064 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:33:11.822498 1471064 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:33:11.834982 1471064 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:33:11.847811 1471064 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
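
The three YAML documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are staged as `/var/tmp/minikube/kubeadm.yaml.new` before being swapped in. As a hedged sketch, assuming a kubeadm release new enough to ship the `config validate` subcommand, the staged file can be checked against the kubeadm API schema:

    # Validate the generated kubeadm/kubelet/kube-proxy config documents
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
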
	I1018 09:33:11.861785 1471064 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:33:11.865531 1471064 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:33:11.874939 1471064 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:33:11.997005 1471064 ssh_runner.go:195] Run: sudo systemctl start kubelet
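
The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` idiom above rewrites `/etc/hosts` through a temp file so the `control-plane.minikube.internal` alias is replaced rather than appended a second time. Verifying the result on the node is a one-liner:

    # Both should report 192.168.76.2 after the rewrite
    grep 'control-plane.minikube.internal' /etc/hosts
    getent hosts control-plane.minikube.internal
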
	I1018 09:33:12.015870 1471064 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379 for IP: 192.168.76.2
	I1018 09:33:12.015893 1471064 certs.go:195] generating shared ca certs ...
	I1018 09:33:12.015926 1471064 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:12.016082 1471064 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:33:12.016129 1471064 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:33:12.016145 1471064 certs.go:257] generating profile certs ...
	I1018 09:33:12.016237 1471064 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/client.key
	I1018 09:33:12.016292 1471064 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key.9dbb2352
	I1018 09:33:12.016335 1471064 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.key
	I1018 09:33:12.016456 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:33:12.016491 1471064 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:33:12.016505 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:33:12.016528 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:33:12.016554 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:33:12.016581 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:33:12.016631 1471064 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:33:12.017256 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:33:12.042042 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:33:12.062375 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:33:12.082194 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:33:12.114465 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:33:12.136168 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:33:12.158249 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:33:12.194668 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/embed-certs-559379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:33:12.214751 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:33:12.235039 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:33:12.252778 1471064 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:33:12.270510 1471064 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:33:12.283890 1471064 ssh_runner.go:195] Run: openssl version
	I1018 09:33:12.290440 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:33:12.299311 1471064 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:33:12.303278 1471064 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:33:12.303341 1471064 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:33:12.347337 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:33:12.355029 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:33:12.363750 1471064 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:33:12.368471 1471064 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:33:12.368566 1471064 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:33:12.410009 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:33:12.417993 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:33:12.426306 1471064 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:33:12.430543 1471064 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:33:12.430617 1471064 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:33:12.472438 1471064 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
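
The `openssl x509 -hash` / `ln -fs ... <hash>.0` pairs above implement OpenSSL's subject-hash directory convention: clients look a CA up in `/etc/ssl/certs` by `<subject-hash>.0`. A condensed sketch of the same dance for one of the certs in this run:

    # Compute the subject hash and install the lookup symlink OpenSSL expects
    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"   # b5213941.0 in this run
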
	I1018 09:33:12.480183 1471064 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:33:12.483718 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:33:12.526096 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:33:12.572260 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:33:12.614442 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:33:12.677036 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:33:12.747948 1471064 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
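
Each `-checkend 86400` run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger regeneration. Reproducing the check for one cert:

    # Exit 0: valid for at least another 24h; exit 1: expiring or expired
    if sudo openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/etcd/server.crt; then
      echo "etcd server cert ok"
    else
      echo "etcd server cert expires within 24h"
    fi
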
	I1018 09:33:12.815768 1471064 kubeadm.go:400] StartCluster: {Name:embed-certs-559379 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-559379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:33:12.815986 1471064 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:33:12.816085 1471064 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:33:12.883757 1471064 cri.go:89] found id: "107f423c474214a77b70bea579d8693f96941573e52099aab36ca04cad80b9fb"
	I1018 09:33:12.883834 1471064 cri.go:89] found id: "9e4bebc346e34245095acfdc99e4bf27d586ba1008354824cc3842710f552d3d"
	I1018 09:33:12.883868 1471064 cri.go:89] found id: "1fa42435e829fa1ff7a0af9be9dc7035e7cc16ae52106466d057fafcbaf6e9bb"
	I1018 09:33:12.883897 1471064 cri.go:89] found id: "836750ba877589f7642d95bcc7eaea0db209e4198f52173d3d62e2a5392defad"
	I1018 09:33:12.883916 1471064 cri.go:89] found id: ""
	I1018 09:33:12.883987 1471064 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:33:12.907215 1471064 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:33:12Z" level=error msg="open /run/runc: no such file or directory"
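
The warning above is expected on a freshly restarted node: `runc list` fails with "open /run/runc: no such file or directory" when runc has no state directory, which simply means no runc-managed (i.e. paused) containers exist, so minikube logs the failure and proceeds. A hedged manual check:

    # A missing or empty /run/runc just means nothing is paused via runc
    sudo ls /run/runc 2>/dev/null || echo "no runc state directory"
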
	I1018 09:33:12.907338 1471064 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:33:12.922690 1471064 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:33:12.922750 1471064 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:33:12.922825 1471064 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:33:12.947230 1471064 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:33:12.947964 1471064 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-559379" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:33:12.948293 1471064 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-1274243/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-559379" cluster setting kubeconfig missing "embed-certs-559379" context setting]
	I1018 09:33:12.948795 1471064 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:12.950418 1471064 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:33:12.962052 1471064 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:33:12.962086 1471064 kubeadm.go:601] duration metric: took 39.307459ms to restartPrimaryControlPlane
	I1018 09:33:12.962126 1471064 kubeadm.go:402] duration metric: took 146.368172ms to StartCluster
	I1018 09:33:12.962142 1471064 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:12.962232 1471064 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:33:12.963511 1471064 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:12.963791 1471064 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:33:12.964359 1471064 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:12.964401 1471064 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:33:12.964460 1471064 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-559379"
	I1018 09:33:12.964478 1471064 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-559379"
	W1018 09:33:12.964484 1471064 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:33:12.964505 1471064 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:33:12.964984 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:12.965158 1471064 addons.go:69] Setting dashboard=true in profile "embed-certs-559379"
	I1018 09:33:12.965187 1471064 addons.go:238] Setting addon dashboard=true in "embed-certs-559379"
	W1018 09:33:12.965209 1471064 addons.go:247] addon dashboard should already be in state true
	I1018 09:33:12.965254 1471064 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:33:12.965687 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:12.968377 1471064 addons.go:69] Setting default-storageclass=true in profile "embed-certs-559379"
	I1018 09:33:12.968412 1471064 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-559379"
	I1018 09:33:12.974509 1471064 out.go:179] * Verifying Kubernetes components...
	I1018 09:33:12.974905 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:12.981796 1471064 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:33:13.017924 1471064 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:33:13.021842 1471064 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:33:13.024987 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:33:13.025017 1471064 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:33:13.025103 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:13.036943 1471064 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:33:13.043432 1471064 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:33:13.043467 1471064 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:33:13.043545 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:13.062354 1471064 addons.go:238] Setting addon default-storageclass=true in "embed-certs-559379"
	W1018 09:33:13.062385 1471064 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:33:13.062409 1471064 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:33:13.062821 1471064 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:33:13.096021 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:13.108777 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:13.127666 1471064 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:33:13.127699 1471064 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:33:13.127761 1471064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:33:13.157434 1471064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:33:13.384004 1471064 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:33:13.391431 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:33:13.391457 1471064 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:33:13.419346 1471064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:33:13.423901 1471064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:33:13.447513 1471064 node_ready.go:35] waiting up to 6m0s for node "embed-certs-559379" to be "Ready" ...
	I1018 09:33:13.449087 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:33:13.449113 1471064 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:33:13.544223 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:33:13.544249 1471064 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:33:13.609966 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:33:13.609997 1471064 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:33:13.666426 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:33:13.666452 1471064 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:33:13.698459 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:33:13.698486 1471064 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:33:13.715891 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:33:13.715913 1471064 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:33:13.728554 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:33:13.728579 1471064 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:33:13.742636 1471064 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:33:13.742662 1471064 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:33:13.757270 1471064 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
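
The dashboard addon is applied as ten staged manifests in a single `kubectl apply` with repeated `-f` flags, run against the in-cluster kubeconfig. kubectl also accepts a directory; a hedged near-equivalent (it would additionally pick up the other manifests staged under the same directory, such as storage-provisioner.yaml and storageclass.yaml) is:

    # Apply everything staged under the addons directory in one shot
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/
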
	W1018 09:33:10.941428 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:13.437208 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:17.724427 1471064 node_ready.go:49] node "embed-certs-559379" is "Ready"
	I1018 09:33:17.724468 1471064 node_ready.go:38] duration metric: took 4.276905534s for node "embed-certs-559379" to be "Ready" ...
	I1018 09:33:17.724492 1471064 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:33:17.724575 1471064 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:33:19.566885 1471064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.147503177s)
	I1018 09:33:19.566930 1471064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.14300375s)
	I1018 09:33:19.650074 1471064 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.892758176s)
	I1018 09:33:19.650274 1471064 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.925682992s)
	I1018 09:33:19.650310 1471064 api_server.go:72] duration metric: took 6.686487227s to wait for apiserver process to appear ...
	I1018 09:33:19.650330 1471064 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:33:19.650360 1471064 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:33:19.653217 1471064 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-559379 addons enable metrics-server
	
	I1018 09:33:19.656090 1471064 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1018 09:33:15.437884 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:17.440550 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:19.937860 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:19.659040 1471064 addons.go:514] duration metric: took 6.694618652s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1018 09:33:19.663946 1471064 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
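
The healthz probe is a plain HTTPS GET against the apiserver, and `ok` with status 200 is the healthy response. Default RBAC allows unauthenticated access to `/healthz`, so it can be reproduced from the host with curl (the serving cert covers 192.168.76.2 per the certSANs above; `-k` skips verification for brevity):

    curl -k https://192.168.76.2:8443/healthz   # prints "ok" when healthy
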
	I1018 09:33:19.667954 1471064 api_server.go:141] control plane version: v1.34.1
	I1018 09:33:19.667975 1471064 api_server.go:131] duration metric: took 17.626922ms to wait for apiserver health ...
	I1018 09:33:19.667984 1471064 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:33:19.685908 1471064 system_pods.go:59] 8 kube-system pods found
	I1018 09:33:19.685981 1471064 system_pods.go:61] "coredns-66bc5c9577-t9blq" [07dead7a-c196-4355-8e63-d7dbe47b07cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:33:19.686007 1471064 system_pods.go:61] "etcd-embed-certs-559379" [473c810d-3278-481b-ad96-7f200a82f830] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:33:19.686029 1471064 system_pods.go:61] "kindnet-6ltrq" [ca80e038-38ba-42a6-8275-fcc38916c7ca] Running
	I1018 09:33:19.686075 1471064 system_pods.go:61] "kube-apiserver-embed-certs-559379" [ed153ff3-f3bf-44ba-ad22-b935d59b6c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:33:19.686096 1471064 system_pods.go:61] "kube-controller-manager-embed-certs-559379" [dadcca5c-657c-42e4-865c-cc21d7af7fbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:33:19.686118 1471064 system_pods.go:61] "kube-proxy-82pzn" [4d204191-f23a-4031-a37d-a4c1ec529e4c] Running
	I1018 09:33:19.686149 1471064 system_pods.go:61] "kube-scheduler-embed-certs-559379" [0bc4c8ce-35cf-41a6-a6fa-a1834adb12a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:33:19.686169 1471064 system_pods.go:61] "storage-provisioner" [0e85b72f-adef-4429-bf1f-1f003538e5bb] Running
	I1018 09:33:19.686189 1471064 system_pods.go:74] duration metric: took 18.198558ms to wait for pod list to return data ...
	I1018 09:33:19.686209 1471064 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:33:19.705360 1471064 default_sa.go:45] found service account: "default"
	I1018 09:33:19.705425 1471064 default_sa.go:55] duration metric: took 19.195305ms for default service account to be created ...
	I1018 09:33:19.705449 1471064 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:33:19.714748 1471064 system_pods.go:86] 8 kube-system pods found
	I1018 09:33:19.714825 1471064 system_pods.go:89] "coredns-66bc5c9577-t9blq" [07dead7a-c196-4355-8e63-d7dbe47b07cc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:33:19.714850 1471064 system_pods.go:89] "etcd-embed-certs-559379" [473c810d-3278-481b-ad96-7f200a82f830] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:33:19.714871 1471064 system_pods.go:89] "kindnet-6ltrq" [ca80e038-38ba-42a6-8275-fcc38916c7ca] Running
	I1018 09:33:19.714908 1471064 system_pods.go:89] "kube-apiserver-embed-certs-559379" [ed153ff3-f3bf-44ba-ad22-b935d59b6c38] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:33:19.714937 1471064 system_pods.go:89] "kube-controller-manager-embed-certs-559379" [dadcca5c-657c-42e4-865c-cc21d7af7fbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:33:19.714958 1471064 system_pods.go:89] "kube-proxy-82pzn" [4d204191-f23a-4031-a37d-a4c1ec529e4c] Running
	I1018 09:33:19.714989 1471064 system_pods.go:89] "kube-scheduler-embed-certs-559379" [0bc4c8ce-35cf-41a6-a6fa-a1834adb12a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:33:19.715022 1471064 system_pods.go:89] "storage-provisioner" [0e85b72f-adef-4429-bf1f-1f003538e5bb] Running
	I1018 09:33:19.715071 1471064 system_pods.go:126] duration metric: took 9.58082ms to wait for k8s-apps to be running ...
	I1018 09:33:19.715094 1471064 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:33:19.715175 1471064 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:33:19.736456 1471064 system_svc.go:56] duration metric: took 21.354736ms WaitForService to wait for kubelet
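
The kubelet service check above boils down to `systemctl is-active --quiet`, whose exit code alone carries the answer. As a sketch:

    # Quiet mode prints nothing; the exit code signals the unit state
    sudo systemctl is-active --quiet kubelet && echo "kubelet running"
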
	I1018 09:33:19.736530 1471064 kubeadm.go:586] duration metric: took 6.772699487s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:33:19.736563 1471064 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:33:19.744765 1471064 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:33:19.744835 1471064 node_conditions.go:123] node cpu capacity is 2
	I1018 09:33:19.744863 1471064 node_conditions.go:105] duration metric: took 8.278563ms to run NodePressure ...
	I1018 09:33:19.744886 1471064 start.go:241] waiting for startup goroutines ...
	I1018 09:33:19.744919 1471064 start.go:246] waiting for cluster config update ...
	I1018 09:33:19.744957 1471064 start.go:255] writing updated cluster config ...
	I1018 09:33:19.745274 1471064 ssh_runner.go:195] Run: rm -f paused
	I1018 09:33:19.749849 1471064 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:33:19.755161 1471064 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t9blq" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:33:21.762955 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:21.946576 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	W1018 09:33:24.436386 1468145 pod_ready.go:104] pod "coredns-66bc5c9577-l2rmq" is not "Ready", error: <nil>
	I1018 09:33:25.437104 1468145 pod_ready.go:94] pod "coredns-66bc5c9577-l2rmq" is "Ready"
	I1018 09:33:25.437133 1468145 pod_ready.go:86] duration metric: took 35.006416252s for pod "coredns-66bc5c9577-l2rmq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.440650 1468145 pod_ready.go:83] waiting for pod "etcd-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.444817 1468145 pod_ready.go:94] pod "etcd-no-preload-886951" is "Ready"
	I1018 09:33:25.444850 1468145 pod_ready.go:86] duration metric: took 4.16964ms for pod "etcd-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.448454 1468145 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.453561 1468145 pod_ready.go:94] pod "kube-apiserver-no-preload-886951" is "Ready"
	I1018 09:33:25.453593 1468145 pod_ready.go:86] duration metric: took 5.104145ms for pod "kube-apiserver-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.456659 1468145 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.634401 1468145 pod_ready.go:94] pod "kube-controller-manager-no-preload-886951" is "Ready"
	I1018 09:33:25.634426 1468145 pod_ready.go:86] duration metric: took 177.739016ms for pod "kube-controller-manager-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:25.834448 1468145 pod_ready.go:83] waiting for pod "kube-proxy-4gbs9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:26.235556 1468145 pod_ready.go:94] pod "kube-proxy-4gbs9" is "Ready"
	I1018 09:33:26.235624 1468145 pod_ready.go:86] duration metric: took 401.148143ms for pod "kube-proxy-4gbs9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:26.434557 1468145 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:26.834635 1468145 pod_ready.go:94] pod "kube-scheduler-no-preload-886951" is "Ready"
	I1018 09:33:26.834715 1468145 pod_ready.go:86] duration metric: took 400.087856ms for pod "kube-scheduler-no-preload-886951" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:26.834741 1468145 pod_ready.go:40] duration metric: took 36.408287538s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:33:26.904340 1468145 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:33:26.912343 1468145 out.go:179] * Done! kubectl is now configured to use "no-preload-886951" cluster and "default" namespace by default
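
The "minor skew: 1" note above is informational: kubectl is supported within one minor version of the apiserver, so 1.33.2 against 1.34.1 only earns a warning. Checking the skew directly, assuming jq is available:

    kubectl version -o json | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'
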
	W1018 09:33:24.260745 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:26.260871 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:28.763832 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:31.261529 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:33.761788 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:36.260997 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:38.761018 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.795361005Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=862f3da6-af6d-4d96-a638-cefd5966ac42 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.796256118Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7326b57b-bcfb-41dd-9556-3db71ff5b6a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.796484362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.808488409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.808674201Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/02b006731b4f7ca7b03f2ecdc902b1648131b4fc26a0890e020520eb0fa92338/merged/etc/passwd: no such file or directory"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.808696756Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/02b006731b4f7ca7b03f2ecdc902b1648131b4fc26a0890e020520eb0fa92338/merged/etc/group: no such file or directory"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.808978455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.842543666Z" level=info msg="Created container 4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512: kube-system/storage-provisioner/storage-provisioner" id=7326b57b-bcfb-41dd-9556-3db71ff5b6a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.844637121Z" level=info msg="Starting container: 4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512" id=2f969cd5-d62e-41f4-bf36-2eda7a0025a9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:33:19 no-preload-886951 crio[652]: time="2025-10-18T09:33:19.863576483Z" level=info msg="Started container" PID=1636 containerID=4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512 description=kube-system/storage-provisioner/storage-provisioner id=2f969cd5-d62e-41f4-bf36-2eda7a0025a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f03660d7f76a38c7abe15c94b1408fe200bc53b431bd70a29f5d34fb8dd778ee
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.629153828Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.633227733Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.633376701Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.633447583Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.636808031Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.636967288Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.637039672Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.640708919Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.640851734Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.640927768Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.644343255Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.644483961Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.644560332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.648341748Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:29 no-preload-886951 crio[652]: time="2025-10-18T09:33:29.648479525Z" level=info msg="Updated default CNI network name to kindnet"
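
The CREATE/WRITE/RENAME sequence above is CRI-O's inotify watch on /etc/cni/net.d following kindnet's write-to-temp-then-rename update of its conflist; after each event CRI-O re-reads the file and keeps kindnet as the default network. To see what it settled on:

    # The renamed conflist is the live default CNI network
    ls -l /etc/cni/net.d/
    cat /etc/cni/net.d/10-kindnet.conflist
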
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4c32c80c458b3       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   f03660d7f76a3       storage-provisioner                          kube-system
	6fe9487b048d3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   0cb015f6ee55a       dashboard-metrics-scraper-6ffb444bf9-p4dqv   kubernetes-dashboard
	3c99758e0b671       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   41aee173dc755       kubernetes-dashboard-855c9754f9-smc6z        kubernetes-dashboard
	f505975793d59       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   21822c335a3a2       coredns-66bc5c9577-l2rmq                     kube-system
	553b5a889b243       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   361ac27ca2eb1       busybox                                      default
	306e8654ef730       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   9d211d1e0d8dc       kube-proxy-4gbs9                             kube-system
	94b1e528e52b0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   53f22fbcbd790       kindnet-l4xmh                                kube-system
	c1c8eeb295536       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago       Exited              storage-provisioner         1                   f03660d7f76a3       storage-provisioner                          kube-system
	52a4f82d25803       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   68e4f0d4a62c6       kube-apiserver-no-preload-886951             kube-system
	0b366ec41824a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   ebc5dd91e8d69       kube-controller-manager-no-preload-886951    kube-system
	0cc3656fad24e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   eb4e23e4eb03b       kube-scheduler-no-preload-886951             kube-system
	8333b66cfc8ee       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   cf4650c5a2ebf       etcd-no-preload-886951                       kube-system
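
The table above is the CRI's view of the node; it can be reproduced with crictl pointed at the CRI-O socket (note `-a` is needed to include the two Exited attempts shown):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a -o table
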
	
	
	==> coredns [f505975793d59951a10cdd493040db58400af6b79be4d56832adb20fa9f0f241] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39737 - 65000 "HINFO IN 8027062649216342014.8987925391557269663. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019779757s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
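
The `dial tcp 10.96.0.1:443: i/o timeout` errors above show coredns briefly unable to reach the apiserver through the `kubernetes` Service while the control plane was restarting; they stop once kube-proxy reprograms the Service (the kube-proxy "Starting" event at 54s in the node events below lines up with this). A hedged way to re-check from the cluster:

    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
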
	
	
	==> describe nodes <==
	Name:               no-preload-886951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-886951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=no-preload-886951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_31_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:31:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-886951
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:33:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:33:18 +0000   Sat, 18 Oct 2025 09:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:33:18 +0000   Sat, 18 Oct 2025 09:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:33:18 +0000   Sat, 18 Oct 2025 09:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:33:18 +0000   Sat, 18 Oct 2025 09:32:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-886951
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                637092e3-28b4-4cc7-8dae-a07e30854491
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-l2rmq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-no-preload-886951                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-l4xmh                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-886951              250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-886951     200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-4gbs9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-886951              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-p4dqv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-smc6z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Warning  CgroupV1                 2m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-886951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-886951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-886951 status is now: NodeHasSufficientPID
	  Normal   Starting                 117s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 117s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node no-preload-886951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-886951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node no-preload-886951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           113s                 node-controller  Node no-preload-886951 event: Registered Node no-preload-886951 in Controller
	  Normal   NodeReady                96s                  kubelet          Node no-preload-886951 status is now: NodeReady
	  Normal   Starting                 62s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node no-preload-886951 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node no-preload-886951 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node no-preload-886951 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                  node-controller  Node no-preload-886951 event: Registered Node no-preload-886951 in Controller
	
	
	==> dmesg <==
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8333b66cfc8ee31208ccfc044b7c62e87b44c35fc9b5f0567f504bfb9f50c42b] <==
	{"level":"warn","ts":"2025-10-18T09:32:46.606596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.627009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.651620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.680191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.697378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.710960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.727149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.747715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.764095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.801458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.814103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.836543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.855642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.868189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.891721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.905611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.926343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.941307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.965136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:46.988248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.006836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.027066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.043626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.061886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:32:47.161118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60354","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:33:44 up 11:16,  0 user,  load average: 2.84, 3.23, 2.68
	Linux no-preload-886951 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [94b1e528e52b0ea9b1d5837d3abfb4568b7db6f71a5853e75fe390f34c4c6734] <==
	I1018 09:32:49.439524       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:32:49.439754       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 09:32:49.439893       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:32:49.439905       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:32:49.439919       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:32:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:32:49.626872       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:32:49.626962       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:32:49.626998       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:32:49.627835       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:33:19.627152       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:33:19.627368       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:33:19.628625       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:33:19.628806       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 09:33:21.228125       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:33:21.228255       1 metrics.go:72] Registering metrics
	I1018 09:33:21.228363       1 controller.go:711] "Syncing nftables rules"
	I1018 09:33:29.628757       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:33:29.628897       1 main.go:301] handling current node
	I1018 09:33:39.635007       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:33:39.635043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [52a4f82d25803437e2b4f9a5a0979d2eddfe52226bc7144054185dd64cbed59e] <==
	I1018 09:32:47.924740       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:32:47.924764       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:32:47.924863       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:32:47.925259       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:32:47.925475       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:32:47.930842       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:32:47.931300       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:32:47.931335       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:32:47.948501       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:32:47.954032       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:32:47.960712       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:32:47.965784       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:32:48.008237       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:32:48.662906       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:32:48.749451       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:32:48.764103       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:32:48.969966       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:32:49.307530       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:32:49.425171       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:32:49.802071       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.17.90"}
	I1018 09:32:49.850115       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.54.51"}
	I1018 09:32:52.699389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:32:52.802538       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:32:52.899416       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:32:52.899540       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0b366ec41824a247d56af9aadad985448d0c26d9381e2243c07a327589c034da] <==
	I1018 09:32:52.311914       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:32:52.313020       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:32:52.313168       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:32:52.313782       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:32:52.314782       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:32:52.315736       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:32:52.316219       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:32:52.316566       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:32:52.322108       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:32:52.322883       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:32:52.326927       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:32:52.335694       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:32:52.338982       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:32:52.343758       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:32:52.343767       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:32:52.343783       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:32:52.349011       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:32:52.349118       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:32:52.350503       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:32:52.350598       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:32:52.350715       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:32:52.350757       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:32:52.350788       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:32:52.352714       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:32:52.356017       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [306e8654ef730e9d2a05a0f8eb73d98594c93f8ae0707c0c01c6dafb942fbf13] <==
	I1018 09:32:49.659654       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:32:49.857770       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:32:49.971776       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:32:49.971916       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 09:32:49.972038       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:32:50.071693       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:32:50.071767       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:32:50.078874       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:32:50.079231       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:32:50.079248       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:32:50.081094       1 config.go:200] "Starting service config controller"
	I1018 09:32:50.081121       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:32:50.081141       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:32:50.081145       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:32:50.081156       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:32:50.081160       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:32:50.082014       1 config.go:309] "Starting node config controller"
	I1018 09:32:50.082037       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:32:50.082052       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:32:50.183373       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:32:50.183782       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:32:50.183820       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0cc3656fad24ee9a111ade774682a71330029b5e0750b4e80a331f7222647630] <==
	I1018 09:32:45.870557       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:32:47.800058       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:32:47.800093       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:32:47.800102       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:32:47.800109       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:32:47.957582       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:32:47.957619       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:32:47.960299       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:32:47.960450       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:32:47.960470       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:32:47.960503       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:32:48.061823       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:32:52 no-preload-886951 kubelet[770]: I1018 09:32:52.960204     770 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7c54360-20fd-4379-99d3-99b644351635-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-p4dqv\" (UID: \"f7c54360-20fd-4379-99d3-99b644351635\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv"
	Oct 18 09:32:53 no-preload-886951 kubelet[770]: W1018 09:32:53.140882     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/crio-0cb015f6ee55a7614c7408b749cef9306a35bfa503f9eb6c280d3246a8477677 WatchSource:0}: Error finding container 0cb015f6ee55a7614c7408b749cef9306a35bfa503f9eb6c280d3246a8477677: Status 404 returned error can't find the container with id 0cb015f6ee55a7614c7408b749cef9306a35bfa503f9eb6c280d3246a8477677
	Oct 18 09:32:53 no-preload-886951 kubelet[770]: W1018 09:32:53.182422     770 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/53265fd5269c62ddd549523e48a4341815211a2547f2f5f74191b59029ce1244/crio-41aee173dc75539e69e498c7b0cb3e857ac90f6f342bc920b71b7798252b487e WatchSource:0}: Error finding container 41aee173dc75539e69e498c7b0cb3e857ac90f6f342bc920b71b7798252b487e: Status 404 returned error can't find the container with id 41aee173dc75539e69e498c7b0cb3e857ac90f6f342bc920b71b7798252b487e
	Oct 18 09:32:54 no-preload-886951 kubelet[770]: I1018 09:32:54.919238     770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:32:57 no-preload-886951 kubelet[770]: I1018 09:32:57.722359     770 scope.go:117] "RemoveContainer" containerID="985a7e6b5de44288deaa53c017ccce7f8d4b4d7c9254ac57bd60ad05381028ff"
	Oct 18 09:32:58 no-preload-886951 kubelet[770]: I1018 09:32:58.727580     770 scope.go:117] "RemoveContainer" containerID="985a7e6b5de44288deaa53c017ccce7f8d4b4d7c9254ac57bd60ad05381028ff"
	Oct 18 09:32:58 no-preload-886951 kubelet[770]: I1018 09:32:58.727953     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:32:58 no-preload-886951 kubelet[770]: E1018 09:32:58.728118     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:32:59 no-preload-886951 kubelet[770]: I1018 09:32:59.731921     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:32:59 no-preload-886951 kubelet[770]: E1018 09:32:59.732044     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:03 no-preload-886951 kubelet[770]: I1018 09:33:03.090520     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:33:03 no-preload-886951 kubelet[770]: E1018 09:33:03.090704     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: I1018 09:33:17.604702     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: I1018 09:33:17.784404     770 scope.go:117] "RemoveContainer" containerID="438ebca390f66b7bca0b8d6453e36b15a399db3c0c936febc97e303e93c6215c"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: I1018 09:33:17.785047     770 scope.go:117] "RemoveContainer" containerID="6fe9487b048d325f2717ac2cb4ee4019221e040bc84f42632e57d631c864c681"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: E1018 09:33:17.785366     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:17 no-preload-886951 kubelet[770]: I1018 09:33:17.832666     770 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-smc6z" podStartSLOduration=17.210757699 podStartE2EDuration="25.832649111s" podCreationTimestamp="2025-10-18 09:32:52 +0000 UTC" firstStartedPulling="2025-10-18 09:32:53.190738587 +0000 UTC m=+10.798534767" lastFinishedPulling="2025-10-18 09:33:01.81263 +0000 UTC m=+19.420426179" observedRunningTime="2025-10-18 09:33:02.754540139 +0000 UTC m=+20.362336327" watchObservedRunningTime="2025-10-18 09:33:17.832649111 +0000 UTC m=+35.440445299"
	Oct 18 09:33:19 no-preload-886951 kubelet[770]: I1018 09:33:19.793781     770 scope.go:117] "RemoveContainer" containerID="c1c8eeb2955365fe9513d621ef316f0153e8d1875eecd9d5277bde4191548620"
	Oct 18 09:33:23 no-preload-886951 kubelet[770]: I1018 09:33:23.090602     770 scope.go:117] "RemoveContainer" containerID="6fe9487b048d325f2717ac2cb4ee4019221e040bc84f42632e57d631c864c681"
	Oct 18 09:33:23 no-preload-886951 kubelet[770]: E1018 09:33:23.090761     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:35 no-preload-886951 kubelet[770]: I1018 09:33:35.605516     770 scope.go:117] "RemoveContainer" containerID="6fe9487b048d325f2717ac2cb4ee4019221e040bc84f42632e57d631c864c681"
	Oct 18 09:33:35 no-preload-886951 kubelet[770]: E1018 09:33:35.605731     770 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p4dqv_kubernetes-dashboard(f7c54360-20fd-4379-99d3-99b644351635)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p4dqv" podUID="f7c54360-20fd-4379-99d3-99b644351635"
	Oct 18 09:33:39 no-preload-886951 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:33:39 no-preload-886951 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:33:39 no-preload-886951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [3c99758e0b671868c66b76b5f6341a5c4d2743886ca97fd1ad90d31b840aeea0] <==
	2025/10/18 09:33:01 Using namespace: kubernetes-dashboard
	2025/10/18 09:33:01 Using in-cluster config to connect to apiserver
	2025/10/18 09:33:01 Using secret token for csrf signing
	2025/10/18 09:33:01 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:33:01 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:33:01 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:33:01 Generating JWE encryption key
	2025/10/18 09:33:01 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:33:01 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:33:02 Initializing JWE encryption key from synchronized object
	2025/10/18 09:33:02 Creating in-cluster Sidecar client
	2025/10/18 09:33:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:33:02 Serving insecurely on HTTP port: 9090
	2025/10/18 09:33:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:33:01 Starting overwatch
	
	
	==> storage-provisioner [4c32c80c458b3f70a26b5257c09d568bd64ccf7a157833322483f6a31d1a7512] <==
	I1018 09:33:19.869232       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:33:19.892042       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:33:19.892097       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:33:19.897700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:23.353232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:27.613907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:31.211828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:34.265018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:37.286714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:37.291495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:33:37.291639       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:33:37.291811       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-886951_50bbf680-6625-40ee-aea5-a02aa1d95183!
	I1018 09:33:37.292758       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9182a57-e5d4-477c-a0cb-d3046b198831", APIVersion:"v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-886951_50bbf680-6625-40ee-aea5-a02aa1d95183 became leader
	W1018 09:33:37.299998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:37.303132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:33:37.392650       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-886951_50bbf680-6625-40ee-aea5-a02aa1d95183!
	W1018 09:33:39.306196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:39.311092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:41.314220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:41.319186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:43.321720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:43.330456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c1c8eeb2955365fe9513d621ef316f0153e8d1875eecd9d5277bde4191548620] <==
	I1018 09:32:49.499221       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:33:19.503185       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
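Note on the post-mortem above: the first storage-provisioner container (c1c8eeb29553) exited because its startup call to the in-cluster apiserver VIP timed out (Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout), in the same window where the kindnet reflectors logged dial timeouts against 10.96.0.1:443; both recovered once the restarted apiserver became reachable again. A minimal sketch for re-checking apiserver reachability by hand (assumes the profile's kubeconfig context exists and that curl is available inside the node image):

	# from the host, through the kubeconfig context minikube manages
	kubectl --context no-preload-886951 get --raw /version
	# from inside the node, against the service VIP the pods use
	docker exec no-preload-886951 curl -sk --max-time 5 https://10.96.0.1:443/version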
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-886951 -n no-preload-886951
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-886951 -n no-preload-886951: exit status 2 (359.335719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-886951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.30s)
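The storage-provisioner logs above also show client-go leader election over a v1 Endpoints lock (kube-system/k8s.io-minikube-hostpath), which is why every acquire/renew round emits the "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning. A sketch for inspecting the current lock holder (the leader annotation key is the conventional one for Endpoints-based locks and is an assumption here, not something printed in the logs):

	kubectl --context no-preload-886951 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'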

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (8.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-559379 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-559379 --alsologtostderr -v=1: exit status 80 (2.609133881s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-559379 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:34:09.574034 1476709 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:09.574257 1476709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:09.574283 1476709 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:09.574311 1476709 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:09.574609 1476709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:34:09.574965 1476709 out.go:368] Setting JSON to false
	I1018 09:34:09.575015 1476709 mustload.go:65] Loading cluster: embed-certs-559379
	I1018 09:34:09.575429 1476709 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:09.575938 1476709 cli_runner.go:164] Run: docker container inspect embed-certs-559379 --format={{.State.Status}}
	I1018 09:34:09.601610 1476709 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:34:09.601968 1476709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:34:09.707671 1476709 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-18 09:34:09.694352091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:34:09.708443 1476709 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-559379 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:34:09.711939 1476709 out.go:179] * Pausing node embed-certs-559379 ... 
	I1018 09:34:09.714847 1476709 host.go:66] Checking if "embed-certs-559379" exists ...
	I1018 09:34:09.715167 1476709 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:09.715212 1476709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-559379
	I1018 09:34:09.741453 1476709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34896 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/embed-certs-559379/id_rsa Username:docker}
	I1018 09:34:09.850565 1476709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:34:09.871011 1476709 pause.go:52] kubelet running: true
	I1018 09:34:09.871081 1476709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:34:10.179603 1476709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:34:10.179717 1476709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:34:10.274874 1476709 cri.go:89] found id: "5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76"
	I1018 09:34:10.274919 1476709 cri.go:89] found id: "0024b85e5ef70b9bf4ed4dae2ee9734cd4169e59884bd9f20cf0fa1d53ab8b4d"
	I1018 09:34:10.274925 1476709 cri.go:89] found id: "f33068c7b92cd4cb6c77a59c4716cbd106f6ac2f31c1ec7071a66ecb90a2813e"
	I1018 09:34:10.274929 1476709 cri.go:89] found id: "c1528bdaee222f9277110c1d5151cc9bfb3371a213419bb2ef053388848c0a56"
	I1018 09:34:10.274932 1476709 cri.go:89] found id: "90117d3668eec91cac997ca9f7c2efbc2a28365287180d51f10287bfcca9e046"
	I1018 09:34:10.274936 1476709 cri.go:89] found id: "107f423c474214a77b70bea579d8693f96941573e52099aab36ca04cad80b9fb"
	I1018 09:34:10.274939 1476709 cri.go:89] found id: "9e4bebc346e34245095acfdc99e4bf27d586ba1008354824cc3842710f552d3d"
	I1018 09:34:10.274942 1476709 cri.go:89] found id: "1fa42435e829fa1ff7a0af9be9dc7035e7cc16ae52106466d057fafcbaf6e9bb"
	I1018 09:34:10.274945 1476709 cri.go:89] found id: "836750ba877589f7642d95bcc7eaea0db209e4198f52173d3d62e2a5392defad"
	I1018 09:34:10.274953 1476709 cri.go:89] found id: "684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	I1018 09:34:10.274961 1476709 cri.go:89] found id: "0e3ede05f52a881ea5d8c2a1b82dd79a395d0f564a7b3fd96fd62e991cc448db"
	I1018 09:34:10.274964 1476709 cri.go:89] found id: ""
	I1018 09:34:10.275027 1476709 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:10.286685 1476709 retry.go:31] will retry after 180.762208ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:10Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:34:10.468067 1476709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:34:10.483404 1476709 pause.go:52] kubelet running: false
	I1018 09:34:10.483520 1476709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:34:10.687397 1476709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:34:10.687531 1476709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:34:10.773474 1476709 cri.go:89] found id: "5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76"
	I1018 09:34:10.773543 1476709 cri.go:89] found id: "0024b85e5ef70b9bf4ed4dae2ee9734cd4169e59884bd9f20cf0fa1d53ab8b4d"
	I1018 09:34:10.773563 1476709 cri.go:89] found id: "f33068c7b92cd4cb6c77a59c4716cbd106f6ac2f31c1ec7071a66ecb90a2813e"
	I1018 09:34:10.773580 1476709 cri.go:89] found id: "c1528bdaee222f9277110c1d5151cc9bfb3371a213419bb2ef053388848c0a56"
	I1018 09:34:10.773612 1476709 cri.go:89] found id: "90117d3668eec91cac997ca9f7c2efbc2a28365287180d51f10287bfcca9e046"
	I1018 09:34:10.773633 1476709 cri.go:89] found id: "107f423c474214a77b70bea579d8693f96941573e52099aab36ca04cad80b9fb"
	I1018 09:34:10.773650 1476709 cri.go:89] found id: "9e4bebc346e34245095acfdc99e4bf27d586ba1008354824cc3842710f552d3d"
	I1018 09:34:10.773666 1476709 cri.go:89] found id: "1fa42435e829fa1ff7a0af9be9dc7035e7cc16ae52106466d057fafcbaf6e9bb"
	I1018 09:34:10.773697 1476709 cri.go:89] found id: "836750ba877589f7642d95bcc7eaea0db209e4198f52173d3d62e2a5392defad"
	I1018 09:34:10.773722 1476709 cri.go:89] found id: "684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	I1018 09:34:10.773741 1476709 cri.go:89] found id: "0e3ede05f52a881ea5d8c2a1b82dd79a395d0f564a7b3fd96fd62e991cc448db"
	I1018 09:34:10.773759 1476709 cri.go:89] found id: ""
	I1018 09:34:10.773833 1476709 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:10.793784 1476709 retry.go:31] will retry after 194.444554ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:10Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:34:10.989202 1476709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:34:11.009141 1476709 pause.go:52] kubelet running: false
	I1018 09:34:11.009220 1476709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:34:11.244111 1476709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:34:11.244208 1476709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:34:11.340967 1476709 cri.go:89] found id: "5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76"
	I1018 09:34:11.340991 1476709 cri.go:89] found id: "0024b85e5ef70b9bf4ed4dae2ee9734cd4169e59884bd9f20cf0fa1d53ab8b4d"
	I1018 09:34:11.340996 1476709 cri.go:89] found id: "f33068c7b92cd4cb6c77a59c4716cbd106f6ac2f31c1ec7071a66ecb90a2813e"
	I1018 09:34:11.341000 1476709 cri.go:89] found id: "c1528bdaee222f9277110c1d5151cc9bfb3371a213419bb2ef053388848c0a56"
	I1018 09:34:11.341003 1476709 cri.go:89] found id: "90117d3668eec91cac997ca9f7c2efbc2a28365287180d51f10287bfcca9e046"
	I1018 09:34:11.341006 1476709 cri.go:89] found id: "107f423c474214a77b70bea579d8693f96941573e52099aab36ca04cad80b9fb"
	I1018 09:34:11.341009 1476709 cri.go:89] found id: "9e4bebc346e34245095acfdc99e4bf27d586ba1008354824cc3842710f552d3d"
	I1018 09:34:11.341012 1476709 cri.go:89] found id: "1fa42435e829fa1ff7a0af9be9dc7035e7cc16ae52106466d057fafcbaf6e9bb"
	I1018 09:34:11.341016 1476709 cri.go:89] found id: "836750ba877589f7642d95bcc7eaea0db209e4198f52173d3d62e2a5392defad"
	I1018 09:34:11.341022 1476709 cri.go:89] found id: "684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	I1018 09:34:11.341025 1476709 cri.go:89] found id: "0e3ede05f52a881ea5d8c2a1b82dd79a395d0f564a7b3fd96fd62e991cc448db"
	I1018 09:34:11.341038 1476709 cri.go:89] found id: ""
	I1018 09:34:11.341086 1476709 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:11.352862 1476709 retry.go:31] will retry after 389.932963ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:11Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:34:11.743370 1476709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:34:11.757365 1476709 pause.go:52] kubelet running: false
	I1018 09:34:11.757454 1476709 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:34:11.982599 1476709 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:34:11.982708 1476709 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:34:12.077866 1476709 cri.go:89] found id: "5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76"
	I1018 09:34:12.077916 1476709 cri.go:89] found id: "0024b85e5ef70b9bf4ed4dae2ee9734cd4169e59884bd9f20cf0fa1d53ab8b4d"
	I1018 09:34:12.077922 1476709 cri.go:89] found id: "f33068c7b92cd4cb6c77a59c4716cbd106f6ac2f31c1ec7071a66ecb90a2813e"
	I1018 09:34:12.077927 1476709 cri.go:89] found id: "c1528bdaee222f9277110c1d5151cc9bfb3371a213419bb2ef053388848c0a56"
	I1018 09:34:12.077931 1476709 cri.go:89] found id: "90117d3668eec91cac997ca9f7c2efbc2a28365287180d51f10287bfcca9e046"
	I1018 09:34:12.077934 1476709 cri.go:89] found id: "107f423c474214a77b70bea579d8693f96941573e52099aab36ca04cad80b9fb"
	I1018 09:34:12.077939 1476709 cri.go:89] found id: "9e4bebc346e34245095acfdc99e4bf27d586ba1008354824cc3842710f552d3d"
	I1018 09:34:12.077946 1476709 cri.go:89] found id: "1fa42435e829fa1ff7a0af9be9dc7035e7cc16ae52106466d057fafcbaf6e9bb"
	I1018 09:34:12.077949 1476709 cri.go:89] found id: "836750ba877589f7642d95bcc7eaea0db209e4198f52173d3d62e2a5392defad"
	I1018 09:34:12.077963 1476709 cri.go:89] found id: "684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	I1018 09:34:12.077972 1476709 cri.go:89] found id: "0e3ede05f52a881ea5d8c2a1b82dd79a395d0f564a7b3fd96fd62e991cc448db"
	I1018 09:34:12.077975 1476709 cri.go:89] found id: ""
	I1018 09:34:12.078025 1476709 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:34:12.095036 1476709 out.go:203] 
	W1018 09:34:12.098256 1476709 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:34:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:34:12.098286 1476709 out.go:285] * 
	W1018 09:34:12.107757 1476709 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:34:12.113117 1476709 out.go:203] 

                                                
                                                
** /stderr **
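As the trace above shows, minikube pause first disables kubelet, then enumerates the kube-system/kubernetes-dashboard/istio-operator containers via crictl, and finally shells out to sudo runc list -f json to pause them; that last step fails with "open /run/runc: no such file or directory" because runc's default state root is absent on this CRI-O node. A diagnostic sketch for narrowing this down by hand (the config path is an assumption based on a stock CRI-O layout):

	docker exec embed-certs-559379 sudo runc list -f json                # reproduces the failure above
	docker exec embed-certs-559379 sudo crictl ps                        # the CRI view of the same containers still works
	docker exec embed-certs-559379 sudo grep -r runtime_root /etc/crio   # where CRI-O actually keeps its runtime state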
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-559379 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-559379
helpers_test.go:243: (dbg) docker inspect embed-certs-559379:
-- stdout --
	[
	    {
	        "Id": "28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0",
	        "Created": "2025-10-18T09:31:14.969231495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1471190,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:33:04.511564204Z",
	            "FinishedAt": "2025-10-18T09:33:03.51320584Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/hostname",
	        "HostsPath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/hosts",
	        "LogPath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0-json.log",
	        "Name": "/embed-certs-559379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-559379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-559379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0",
	                "LowerDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-559379",
	                "Source": "/var/lib/docker/volumes/embed-certs-559379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-559379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-559379",
	                "name.minikube.sigs.k8s.io": "embed-certs-559379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e5475df9cd643e27d5a7ade26ee37641474b730114d47e60cab35edb74a60e8",
	            "SandboxKey": "/var/run/docker/netns/8e5475df9cd6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34896"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34900"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34898"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34899"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-559379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:45:4a:d4:27:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6157e554c859f57a7166278cd1d0343828367a13a26ff7877c8ce4c80e272af",
	                    "EndpointID": "7811e1d2d5d28dcfb90525e53fae320eca11d7206bb5416986ebf7a0d7cdcdf6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-559379",
	                        "28d5892e22ac"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
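Two things in this inspect matter for the failure: State.Running is true and Paused is false, so the docker container itself survived the failed pause (the breakage is inside the guest), and the "Tmpfs" entry for /run is consistent with the missing /run/runc above. All guest ports are published on 127.0.0.1 with ephemeral host ports. The Go template below is copied verbatim from the provisioning lines later in this log and recovers the SSH host port; only the profile name is an assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the log's own provisioning step uses for "22/tcp";
	// against the inspect above it should print 34896.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"embed-certs-559379").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}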
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-559379 -n embed-certs-559379
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-559379 -n embed-certs-559379: exit status 2 (431.407826ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
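The "(may be ok)" note means the harness tolerates a non-zero exit here: the host is still Running even though the pause left the profile in a mixed state. A small sketch of that tolerance, reusing the exact command line from the run above; treating exit status 2 as non-fatal mirrors the harness note and is not a documented minikube contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "embed-certs-559379", "-n", "embed-certs-559379")
	out, err := cmd.Output()
	fmt.Print(string(out)) // prints "Running" in this run despite the failed pause

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() != 2 {
		fmt.Println("unexpected status exit code:", ee.ExitCode())
	}
}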
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-559379 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-559379 logs -n 25: (1.583749547s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-854768       │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p cert-expiration-854768                                                                                                                                                                                                                     │ cert-expiration-854768       │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ image   │ old-k8s-version-136598 image list --format=json                                                                                                                                                                                               │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-136598 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p no-preload-886951 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p embed-certs-559379 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-559379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ image   │ no-preload-886951 image list --format=json                                                                                                                                                                                                    │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p disable-driver-mounts-877810                                                                                                                                                                                                               │ disable-driver-mounts-877810 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ image   │ embed-certs-559379 image list --format=json                                                                                                                                                                                                   │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ pause   │ -p embed-certs-559379 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:33:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:33:48.478205 1474687 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:48.478322 1474687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:48.478331 1474687 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:48.478336 1474687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:48.478603 1474687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:33:48.479022 1474687 out.go:368] Setting JSON to false
	I1018 09:33:48.480065 1474687 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40576,"bootTime":1760739453,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:33:48.480135 1474687 start.go:141] virtualization:  
	I1018 09:33:48.483806 1474687 out.go:179] * [default-k8s-diff-port-593480] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:33:48.487762 1474687 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:33:48.487902 1474687 notify.go:220] Checking for updates...
	I1018 09:33:48.493914 1474687 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:33:48.496916 1474687 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:33:48.500013 1474687 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:33:48.502909 1474687 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:33:48.505797 1474687 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:33:48.509406 1474687 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:48.509526 1474687 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:33:48.535975 1474687 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:33:48.536112 1474687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:48.592438 1474687 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:33:48.582553625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:33:48.592547 1474687 docker.go:318] overlay module found
	I1018 09:33:48.595664 1474687 out.go:179] * Using the docker driver based on user configuration
	I1018 09:33:48.598528 1474687 start.go:305] selected driver: docker
	I1018 09:33:48.598550 1474687 start.go:925] validating driver "docker" against <nil>
	I1018 09:33:48.598578 1474687 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:33:48.599294 1474687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:48.656625 1474687 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:33:48.647322249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:33:48.656786 1474687 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:33:48.657044 1474687 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:33:48.659909 1474687 out.go:179] * Using Docker driver with root privileges
	I1018 09:33:48.662793 1474687 cni.go:84] Creating CNI manager for ""
	I1018 09:33:48.662856 1474687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:33:48.662871 1474687 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:33:48.662951 1474687 start.go:349] cluster config:
	{Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:33:48.665911 1474687 out.go:179] * Starting "default-k8s-diff-port-593480" primary control-plane node in "default-k8s-diff-port-593480" cluster
	I1018 09:33:48.668758 1474687 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:33:48.671805 1474687 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:33:48.674733 1474687 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:33:48.674803 1474687 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:33:48.674840 1474687 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:33:48.674853 1474687 cache.go:58] Caching tarball of preloaded images
	I1018 09:33:48.674936 1474687 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:33:48.674950 1474687 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:33:48.675048 1474687 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json ...
	I1018 09:33:48.675075 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json: {Name:mkc696d21d44298ffc51a77d02ca52630b2fec37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:48.692499 1474687 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:33:48.692522 1474687 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:33:48.692533 1474687 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:33:48.692555 1474687 start.go:360] acquireMachinesLock for default-k8s-diff-port-593480: {Name:mk139126e1ddb766657a5fd510c1f904e5550412 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:33:48.692656 1474687 start.go:364] duration metric: took 80.983µs to acquireMachinesLock for "default-k8s-diff-port-593480"
	I1018 09:33:48.692687 1474687 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:33:48.692759 1474687 start.go:125] createHost starting for "" (driver="docker")
	W1018 09:33:45.278635 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:47.762342 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	I1018 09:33:48.699035 1474687 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:33:48.699272 1474687 start.go:159] libmachine.API.Create for "default-k8s-diff-port-593480" (driver="docker")
	I1018 09:33:48.699314 1474687 client.go:168] LocalClient.Create starting
	I1018 09:33:48.699382 1474687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem
	I1018 09:33:48.699414 1474687 main.go:141] libmachine: Decoding PEM data...
	I1018 09:33:48.699432 1474687 main.go:141] libmachine: Parsing certificate...
	I1018 09:33:48.699577 1474687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem
	I1018 09:33:48.699624 1474687 main.go:141] libmachine: Decoding PEM data...
	I1018 09:33:48.699641 1474687 main.go:141] libmachine: Parsing certificate...
	I1018 09:33:48.700044 1474687 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-593480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:33:48.718591 1474687 cli_runner.go:211] docker network inspect default-k8s-diff-port-593480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:33:48.718680 1474687 network_create.go:284] running [docker network inspect default-k8s-diff-port-593480] to gather additional debugging logs...
	I1018 09:33:48.718703 1474687 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-593480
	W1018 09:33:48.735448 1474687 cli_runner.go:211] docker network inspect default-k8s-diff-port-593480 returned with exit code 1
	I1018 09:33:48.735478 1474687 network_create.go:287] error running [docker network inspect default-k8s-diff-port-593480]: docker network inspect default-k8s-diff-port-593480: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-593480 not found
	I1018 09:33:48.735491 1474687 network_create.go:289] output of [docker network inspect default-k8s-diff-port-593480]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-593480 not found
	
	** /stderr **
	I1018 09:33:48.735607 1474687 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:33:48.754166 1474687 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-521f8f572997 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:7e:e5:c0:67:29} reservation:<nil>}
	I1018 09:33:48.754540 1474687 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b81e76c4e4f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:bf:e8:f1:22:c8} reservation:<nil>}
	I1018 09:33:48.754882 1474687 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-41e3e621447e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:fc:17:ff:cd:8c} reservation:<nil>}
	I1018 09:33:48.755165 1474687 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c6157e554c85 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:ce:1a:e4:53:ce} reservation:<nil>}
	I1018 09:33:48.755599 1474687 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5540}
	I1018 09:33:48.755621 1474687 network_create.go:124] attempt to create docker network default-k8s-diff-port-593480 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 09:33:48.755685 1474687 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-593480 default-k8s-diff-port-593480
	I1018 09:33:48.816762 1474687 network_create.go:108] docker network default-k8s-diff-port-593480 192.168.85.0/24 created
	I1018 09:33:48.816794 1474687 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-593480" container
	I1018 09:33:48.816865 1474687 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:33:48.832427 1474687 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-593480 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-593480 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:33:48.849252 1474687 oci.go:103] Successfully created a docker volume default-k8s-diff-port-593480
	I1018 09:33:48.849348 1474687 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-593480-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-593480 --entrypoint /usr/bin/test -v default-k8s-diff-port-593480:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:33:49.451205 1474687 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-593480
	I1018 09:33:49.451253 1474687 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:33:49.451273 1474687 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:33:49.451359 1474687 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-593480:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 09:33:50.262152 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:52.761580 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:54.763058 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	I1018 09:33:56.261132 1471064 pod_ready.go:94] pod "coredns-66bc5c9577-t9blq" is "Ready"
	I1018 09:33:56.261245 1471064 pod_ready.go:86] duration metric: took 36.506018592s for pod "coredns-66bc5c9577-t9blq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.264017 1471064 pod_ready.go:83] waiting for pod "etcd-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.270004 1471064 pod_ready.go:94] pod "etcd-embed-certs-559379" is "Ready"
	I1018 09:33:56.270032 1471064 pod_ready.go:86] duration metric: took 5.985203ms for pod "etcd-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.272546 1471064 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.279762 1471064 pod_ready.go:94] pod "kube-apiserver-embed-certs-559379" is "Ready"
	I1018 09:33:56.279879 1471064 pod_ready.go:86] duration metric: took 7.308145ms for pod "kube-apiserver-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.283320 1471064 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.460143 1471064 pod_ready.go:94] pod "kube-controller-manager-embed-certs-559379" is "Ready"
	I1018 09:33:56.460218 1471064 pod_ready.go:86] duration metric: took 176.872545ms for pod "kube-controller-manager-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.659292 1471064 pod_ready.go:83] waiting for pod "kube-proxy-82pzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:57.059218 1471064 pod_ready.go:94] pod "kube-proxy-82pzn" is "Ready"
	I1018 09:33:57.059244 1471064 pod_ready.go:86] duration metric: took 399.924372ms for pod "kube-proxy-82pzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:57.259090 1471064 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:57.659100 1471064 pod_ready.go:94] pod "kube-scheduler-embed-certs-559379" is "Ready"
	I1018 09:33:57.659128 1471064 pod_ready.go:86] duration metric: took 399.965791ms for pod "kube-scheduler-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:57.659142 1471064 pod_ready.go:40] duration metric: took 37.909220937s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:33:57.712479 1471064 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:33:57.715740 1471064 out.go:179] * Done! kubectl is now configured to use "embed-certs-559379" cluster and "default" namespace by default
	I1018 09:33:54.369952 1474687 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-593480:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.918532874s)
	I1018 09:33:54.369987 1474687 kic.go:203] duration metric: took 4.918710625s to extract preloaded images to volume ...
	W1018 09:33:54.370116 1474687 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:33:54.370227 1474687 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:33:54.424366 1474687 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-593480 --name default-k8s-diff-port-593480 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-593480 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-593480 --network default-k8s-diff-port-593480 --ip 192.168.85.2 --volume default-k8s-diff-port-593480:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:33:54.719148 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Running}}
	I1018 09:33:54.743583 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:33:54.774304 1474687 cli_runner.go:164] Run: docker exec default-k8s-diff-port-593480 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:33:54.833859 1474687 oci.go:144] the created container "default-k8s-diff-port-593480" has a running status.
	I1018 09:33:54.833894 1474687 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa...
	I1018 09:33:54.932276 1474687 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:33:54.969701 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:33:54.989629 1474687 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:33:54.989653 1474687 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-593480 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:33:55.044563 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:33:55.066007 1474687 machine.go:93] provisionDockerMachine start ...
	I1018 09:33:55.066181 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:55.105060 1474687 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:55.105506 1474687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34901 <nil> <nil>}
	I1018 09:33:55.105522 1474687 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:33:55.106552 1474687 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53442->127.0.0.1:34901: read: connection reset by peer
	I1018 09:33:58.255268 1474687 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-593480
	
	I1018 09:33:58.255294 1474687 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-593480"
	I1018 09:33:58.255364 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:58.272059 1474687 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:58.272405 1474687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34901 <nil> <nil>}
	I1018 09:33:58.272425 1474687 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-593480 && echo "default-k8s-diff-port-593480" | sudo tee /etc/hostname
	I1018 09:33:58.429327 1474687 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-593480
	
	I1018 09:33:58.429405 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:58.448156 1474687 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:58.448468 1474687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34901 <nil> <nil>}
	I1018 09:33:58.448491 1474687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-593480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-593480/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-593480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:33:58.596037 1474687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:33:58.596106 1474687 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:33:58.596145 1474687 ubuntu.go:190] setting up certificates
	I1018 09:33:58.596184 1474687 provision.go:84] configureAuth start
	I1018 09:33:58.596291 1474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:33:58.612921 1474687 provision.go:143] copyHostCerts
	I1018 09:33:58.612981 1474687 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:33:58.613002 1474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:33:58.613088 1474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:33:58.613181 1474687 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:33:58.613186 1474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:33:58.613213 1474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:33:58.613270 1474687 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:33:58.613275 1474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:33:58.613299 1474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:33:58.613353 1474687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-593480 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-593480 localhost minikube]
	I1018 09:33:59.716366 1474687 provision.go:177] copyRemoteCerts
	I1018 09:33:59.716439 1474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:33:59.716491 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:59.736989 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:33:59.839356 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:33:59.856144 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:33:59.873730 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:33:59.890413 1474687 provision.go:87] duration metric: took 1.294193706s to configureAuth
	I1018 09:33:59.890479 1474687 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:33:59.890682 1474687 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:59.890792 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:59.908851 1474687 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:59.909170 1474687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34901 <nil> <nil>}
	I1018 09:33:59.909190 1474687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:34:00.489958 1474687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
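The SSH command above drops an environment file that the node's crio.service is expected to source (an assumption about the kicbase image, where the unit would pull it in via an EnvironmentFile= directive), then restarts CRI-O so the --insecure-registry flag takes effect for the service CIDR. A minimal sketch to confirm the file from the host, reusing the profile name from this run:

	# hedged sketch: inspect the generated drop-in on the node
	minikube -p default-k8s-diff-port-593480 ssh -- cat /etc/sysconfig/crio.minikube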
	
	I1018 09:34:00.489983 1474687 machine.go:96] duration metric: took 5.423958334s to provisionDockerMachine
	I1018 09:34:00.489993 1474687 client.go:171] duration metric: took 11.790672683s to LocalClient.Create
	I1018 09:34:00.490009 1474687 start.go:167] duration metric: took 11.790738059s to libmachine.API.Create "default-k8s-diff-port-593480"
	I1018 09:34:00.490016 1474687 start.go:293] postStartSetup for "default-k8s-diff-port-593480" (driver="docker")
	I1018 09:34:00.490027 1474687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:34:00.490119 1474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:34:00.490170 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:00.509763 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:00.615943 1474687 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:34:00.619388 1474687 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:34:00.619417 1474687 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:34:00.619428 1474687 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:34:00.619484 1474687 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:34:00.619575 1474687 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:34:00.619682 1474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:34:00.627739 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:34:00.646371 1474687 start.go:296] duration metric: took 156.338577ms for postStartSetup
	I1018 09:34:00.646772 1474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:34:00.663893 1474687 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json ...
	I1018 09:34:00.664173 1474687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:34:00.664271 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:00.681468 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:00.780989 1474687 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:34:00.785796 1474687 start.go:128] duration metric: took 12.093022813s to createHost
	I1018 09:34:00.785819 1474687 start.go:83] releasing machines lock for "default-k8s-diff-port-593480", held for 12.093148882s
	I1018 09:34:00.785891 1474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:34:00.802448 1474687 ssh_runner.go:195] Run: cat /version.json
	I1018 09:34:00.802503 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:00.802518 1474687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:34:00.802592 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:00.826736 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:00.828165 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:01.026900 1474687 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:01.033345 1474687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:34:01.070530 1474687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:34:01.074974 1474687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:34:01.075098 1474687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:34:01.108086 1474687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
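minikube sidelines any pre-installed bridge/podman CNI definitions by renaming them with a .mk_disabled suffix, so only the CNI it deploys later (kindnet, per the rest of this run) stays active. A hedged sketch to list what was moved aside:

	# assumption: the node is reachable via `minikube ssh` under this profile
	minikube -p default-k8s-diff-port-593480 ssh -- sudo find /etc/cni/net.d -name '*.mk_disabled'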
	I1018 09:34:01.108162 1474687 start.go:495] detecting cgroup driver to use...
	I1018 09:34:01.108212 1474687 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:34:01.108294 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:34:01.128625 1474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:34:01.147064 1474687 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:34:01.147152 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:34:01.167759 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:34:01.188987 1474687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:34:01.312997 1474687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:34:01.433183 1474687 docker.go:234] disabling docker service ...
	I1018 09:34:01.433289 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:34:01.454484 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:34:01.468560 1474687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:34:01.587887 1474687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:34:01.706280 1474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:34:01.719380 1474687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:34:01.738080 1474687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:34:01.738190 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.747546 1474687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:34:01.747645 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.757351 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.766731 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.776298 1474687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:34:01.786913 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.796438 1474687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.812847 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.822611 1474687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:34:01.833645 1474687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:34:01.841616 1474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:34:01.987862 1474687 ssh_runner.go:195] Run: sudo systemctl restart crio
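The sed edits between 09:34:01.738 and 09:34:01.812 rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls entry opening unprivileged ports from 0; the daemon-reload and restart above then pick the changes up. A hedged one-liner to confirm the effective values after the restart:

	# grep targets mirror the sed edits from this log
	minikube -p default-k8s-diff-port-593480 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf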
	I1018 09:34:02.155595 1474687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:34:02.155890 1474687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:34:02.165907 1474687 start.go:563] Will wait 60s for crictl version
	I1018 09:34:02.165978 1474687 ssh_runner.go:195] Run: which crictl
	I1018 09:34:02.169957 1474687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:34:02.197691 1474687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:34:02.197783 1474687 ssh_runner.go:195] Run: crio --version
	I1018 09:34:02.237603 1474687 ssh_runner.go:195] Run: crio --version
	I1018 09:34:02.277412 1474687 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:34:02.280244 1474687 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-593480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:34:02.297438 1474687 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:34:02.302151 1474687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
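The one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the fresh mapping, write to a temp file, and copy it back over /etc/hosts in one step. The same pattern, generalized as a hedged sketch (HOSTS_IP and HOSTS_NAME are illustrative placeholders, not names from this run):

	# hedged sketch of the grep -v / echo / cp pattern used in the log
	HOSTS_IP=192.168.85.1 HOSTS_NAME=host.minikube.internal
	{ grep -v $'\t'"$HOSTS_NAME"'$' /etc/hosts; printf '%s\t%s\n' "$HOSTS_IP" "$HOSTS_NAME"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts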
	I1018 09:34:02.312443 1474687 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:34:02.312568 1474687 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:34:02.312624 1474687 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:34:02.353095 1474687 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:34:02.353120 1474687 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:34:02.353177 1474687 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:34:02.382401 1474687 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:34:02.382427 1474687 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:34:02.382437 1474687 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1018 09:34:02.382529 1474687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-593480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
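The unit text above is the kubelet drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 378-byte scp a few lines below); the empty ExecStart= clears the distro default before the minikube-specific command line is set. A hedged way to view the merged unit on the node:

	# systemd renders the base unit plus all drop-ins together
	minikube -p default-k8s-diff-port-593480 ssh -- systemctl cat kubelet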
	I1018 09:34:02.382618 1474687 ssh_runner.go:195] Run: crio config
	I1018 09:34:02.449689 1474687 cni.go:84] Creating CNI manager for ""
	I1018 09:34:02.449717 1474687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:34:02.449733 1474687 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:34:02.449777 1474687 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-593480 NodeName:default-k8s-diff-port-593480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:34:02.449940 1474687 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-593480"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
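The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new and later promoted to kubeadm.yaml before init. A hedged sketch for checking such a file by hand; recent kubeadm releases ship a `kubeadm config validate` subcommand, and on versions without it a --dry-run init is the usual fallback:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml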
	
	I1018 09:34:02.450017 1474687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:34:02.458772 1474687 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:34:02.458849 1474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:34:02.467391 1474687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:34:02.480233 1474687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:34:02.493101 1474687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1018 09:34:02.506730 1474687 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:34:02.510436 1474687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:34:02.520320 1474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:34:02.632848 1474687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:34:02.652548 1474687 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480 for IP: 192.168.85.2
	I1018 09:34:02.652632 1474687 certs.go:195] generating shared ca certs ...
	I1018 09:34:02.652664 1474687 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:02.652848 1474687 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:34:02.652937 1474687 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:34:02.652971 1474687 certs.go:257] generating profile certs ...
	I1018 09:34:02.653047 1474687 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.key
	I1018 09:34:02.653084 1474687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt with IP's: []
	I1018 09:34:03.075274 1474687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt ...
	I1018 09:34:03.075308 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: {Name:mk353702f41496c5887bc703787e83a6b9652bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:03.075556 1474687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.key ...
	I1018 09:34:03.075572 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.key: {Name:mk2dab06626ab6977b325fa4ad2d3ba5fcae2043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:03.075729 1474687 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5
	I1018 09:34:03.075758 1474687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt.3ec3eca5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 09:34:03.585561 1474687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt.3ec3eca5 ...
	I1018 09:34:03.585593 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt.3ec3eca5: {Name:mk9b711777f0503a3fada68d61b8c155c50a057a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:03.585778 1474687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5 ...
	I1018 09:34:03.585799 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5: {Name:mka3f2c262afdf16e83da78076b3f91571280d44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:03.585890 1474687 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt.3ec3eca5 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt
	I1018 09:34:03.585981 1474687 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key
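The apiserver certificate produced here carries the SANs listed at 09:34:03.075 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A hedged openssl sketch to confirm them on the written file:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'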
	I1018 09:34:03.586043 1474687 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key
	I1018 09:34:03.586063 1474687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt with IP's: []
	I1018 09:34:04.154396 1474687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt ...
	I1018 09:34:04.154428 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt: {Name:mk34b9ba2e1ebf0c882833d7b8f1337797571013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:04.154612 1474687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key ...
	I1018 09:34:04.154626 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key: {Name:mkd1feca4d14672ebd2446f2b1978df69cd0a9ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:04.154801 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:34:04.154846 1474687 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:34:04.154860 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:34:04.154885 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:34:04.154911 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:34:04.154936 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:34:04.154986 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:34:04.155582 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:34:04.179806 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:34:04.199895 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:34:04.224489 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:34:04.241381 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:34:04.259021 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:34:04.276943 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:34:04.294159 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:34:04.311408 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:34:04.329620 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:34:04.346631 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:34:04.365114 1474687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:34:04.378366 1474687 ssh_runner.go:195] Run: openssl version
	I1018 09:34:04.386494 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:34:04.395831 1474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:34:04.399558 1474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:34:04.399622 1474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:34:04.441473 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:34:04.449715 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:34:04.458537 1474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:34:04.462390 1474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:34:04.462449 1474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:34:04.503647 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:34:04.511916 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:34:04.520139 1474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:34:04.523812 1474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:34:04.523964 1474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:34:04.565171 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
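The 51391683.0, 3ec20f2e.0, and b5213941.0 link names are not arbitrary: OpenSSL looks certificates up in /etc/ssl/certs by <subject-hash>.0, and each name matches the `openssl x509 -hash -noout` run just before the corresponding ln. A hedged sketch of the relationship:

	# run inside the node (e.g. via minikube ssh);
	# the link name should equal the subject hash of the PEM it points to
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"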
	I1018 09:34:04.573454 1474687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:34:04.576740 1474687 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:34:04.576790 1474687 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:34:04.576872 1474687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:04.576943 1474687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:04.603451 1474687 cri.go:89] found id: ""
	I1018 09:34:04.603523 1474687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:34:04.611653 1474687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:34:04.619263 1474687 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:34:04.619329 1474687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:34:04.626951 1474687 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:34:04.626974 1474687 kubeadm.go:157] found existing configuration files:
	
	I1018 09:34:04.627022 1474687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 09:34:04.634542 1474687 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:34:04.634615 1474687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:34:04.642402 1474687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 09:34:04.650009 1474687 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:34:04.650073 1474687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:34:04.657509 1474687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 09:34:04.665316 1474687 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:34:04.665380 1474687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:34:04.672588 1474687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 09:34:04.679967 1474687 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:34:04.680034 1474687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:34:04.687260 1474687 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:34:04.724214 1474687 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:34:04.724282 1474687 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:34:04.746859 1474687 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:34:04.746939 1474687 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:34:04.746981 1474687 kubeadm.go:318] OS: Linux
	I1018 09:34:04.747033 1474687 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:34:04.747090 1474687 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:34:04.747147 1474687 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:34:04.747208 1474687 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:34:04.747271 1474687 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:34:04.747323 1474687 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:34:04.747375 1474687 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:34:04.747428 1474687 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:34:04.747481 1474687 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:34:04.816475 1474687 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:34:04.816606 1474687 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:34:04.816712 1474687 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:34:04.824199 1474687 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:34:04.829985 1474687 out.go:252]   - Generating certificates and keys ...
	I1018 09:34:04.830097 1474687 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:34:04.830180 1474687 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:34:05.279691 1474687 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:34:06.592986 1474687 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:34:07.095610 1474687 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:34:07.353562 1474687 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:34:07.555523 1474687 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:34:07.555716 1474687 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-593480 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 09:34:07.879873 1474687 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:34:07.880299 1474687 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-593480 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 09:34:08.315676 1474687 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:34:08.598375 1474687 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:34:09.000621 1474687 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:34:09.000956 1474687 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:34:10.163835 1474687 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:34:10.976789 1474687 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:34:11.245888 1474687 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:34:11.707525 1474687 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:34:11.853269 1474687 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:34:11.854230 1474687 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:34:11.858288 1474687 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
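After the kubeconfig files, kubeadm drops static pod manifests for etcd and the control-plane components into /etc/kubernetes/manifests, where the kubelet (staticPodPath in the KubeletConfiguration above) picks them up. A hedged check on the node:

	minikube -p default-k8s-diff-port-593480 ssh -- sudo ls /etc/kubernetes/manifests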
	
	
	==> CRI-O <==
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.435782987Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6340802-64fa-40dd-b958-96c9c8582e60 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.437598149Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=14609d96-96b8-4bee-9423-83397399e27b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.437844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.443893842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.444077697Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/56f4dec220a495d8ffedf9771d7f3c362bcf37fd7737ac616afe99ca6a81ac9b/merged/etc/passwd: no such file or directory"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.444100999Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/56f4dec220a495d8ffedf9771d7f3c362bcf37fd7737ac616afe99ca6a81ac9b/merged/etc/group: no such file or directory"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.444374928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.474984012Z" level=info msg="Created container 5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76: kube-system/storage-provisioner/storage-provisioner" id=14609d96-96b8-4bee-9423-83397399e27b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.476280689Z" level=info msg="Starting container: 5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76" id=29a7dfe5-ff16-44b1-b330-de6fe24c47ef name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.479833124Z" level=info msg="Started container" PID=1651 containerID=5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76 description=kube-system/storage-provisioner/storage-provisioner id=29a7dfe5-ff16-44b1-b330-de6fe24c47ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8a98d5b13c936582e07af1e06ee31df1d56e0c72c0413ecc79c2747e9d3a2cc
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.221886812Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.236142136Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.236324531Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.236397956Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.24923275Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.249767881Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.250040038Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.268143829Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.268328029Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.268413507Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.280221474Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.28040583Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.280499349Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.288191196Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.288358141Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5de25f73c22dd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   b8a98d5b13c93       storage-provisioner                          kube-system
	684cd6eff48e4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   2b5e648e0b54c       dashboard-metrics-scraper-6ffb444bf9-s9n4f   kubernetes-dashboard
	0e3ede05f52a8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   ae28d7cca8a0b       kubernetes-dashboard-855c9754f9-d75lm        kubernetes-dashboard
	0024b85e5ef70       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   6989432ac6f21       coredns-66bc5c9577-t9blq                     kube-system
	04f31e69246a0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   2ed8bf9d6a331       busybox                                      default
	f33068c7b92cd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   543899513fc98       kindnet-6ltrq                                kube-system
	c1528bdaee222       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   fe6cd3b1aa39e       kube-proxy-82pzn                             kube-system
	90117d3668eec       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   b8a98d5b13c93       storage-provisioner                          kube-system
	107f423c47421       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   530278d08635d       etcd-embed-certs-559379                      kube-system
	9e4bebc346e34       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   718e0bf2093a3       kube-controller-manager-embed-certs-559379   kube-system
	1fa42435e829f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9ddf0b97b5da7       kube-apiserver-embed-certs-559379            kube-system
	836750ba87758       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   9762eb5edde86       kube-scheduler-embed-certs-559379            kube-system
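This table is the post-mortem container listing for the embed-certs-559379 node. A hedged way to reproduce it while a cluster is still up; `crictl ps -a` includes exited containers, matching the Exited rows above:

	minikube -p embed-certs-559379 ssh -- sudo crictl ps -a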
	
	
	==> coredns [0024b85e5ef70b9bf4ed4dae2ee9734cd4169e59884bd9f20cf0fa1d53ab8b4d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48773 - 2999 "HINFO IN 7994601189480743672.2215154079161539457. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014071895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
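The repeated i/o timeouts against 10.96.0.1:443 mean CoreDNS could not reach the in-cluster apiserver Service while the control plane was coming back up. A hedged pair of checks for this situation:

	# is the kubernetes Service endpoint populated, and is coredns Ready?
	kubectl get endpoints kubernetes -n default
	kubectl -n kube-system get pods -l k8s-app=kube-dns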
	
	
	==> describe nodes <==
	Name:               embed-certs-559379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-559379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=embed-certs-559379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_31_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:31:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-559379
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:34:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:33:48 +0000   Sat, 18 Oct 2025 09:31:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:33:48 +0000   Sat, 18 Oct 2025 09:31:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:33:48 +0000   Sat, 18 Oct 2025 09:31:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:33:48 +0000   Sat, 18 Oct 2025 09:32:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-559379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                963b98db-af62-4b5f-9ed9-d04f81062030
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-t9blq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-embed-certs-559379                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-6ltrq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-embed-certs-559379             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-embed-certs-559379    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-82pzn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-embed-certs-559379             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s9n4f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d75lm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node embed-certs-559379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node embed-certs-559379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m20s                  node-controller  Node embed-certs-559379 event: Registered Node embed-certs-559379 in Controller
	  Normal   NodeReady                97s                    kubelet          Node embed-certs-559379 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node embed-certs-559379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node embed-certs-559379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node embed-certs-559379 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node embed-certs-559379 event: Registered Node embed-certs-559379 in Controller
	
	
	==> dmesg <==
	[Oct18 09:11] overlayfs: idmapped layers are currently not supported
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [107f423c474214a77b70bea579d8693f96941573e52099aab36ca04cad80b9fb] <==
	{"level":"warn","ts":"2025-10-18T09:33:15.943714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:15.995898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.002650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.020344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.056220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.058396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.076746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.104302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.129205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.145265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.170975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.180504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.210688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.219250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.236524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.259152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.271635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.289855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.308321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.339968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.391814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.419111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.443502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.461519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.532330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60914","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:34:13 up 11:16,  0 user,  load average: 2.79, 3.15, 2.67
	Linux embed-certs-559379 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f33068c7b92cd4cb6c77a59c4716cbd106f6ac2f31c1ec7071a66ecb90a2813e] <==
	I1018 09:33:19.018294       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:33:19.018495       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:33:19.018618       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:33:19.018629       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:33:19.018639       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:33:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:33:19.219735       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:33:19.219811       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:33:19.222777       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:33:19.223751       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:33:49.220618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:33:49.224199       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:33:49.224356       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:33:49.224477       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 09:33:50.123986       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:33:50.124108       1 metrics.go:72] Registering metrics
	I1018 09:33:50.124204       1 controller.go:711] "Syncing nftables rules"
	I1018 09:33:59.221570       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:33:59.221621       1 main.go:301] handling current node
	I1018 09:34:09.224618       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:34:09.224672       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1fa42435e829fa1ff7a0af9be9dc7035e7cc16ae52106466d057fafcbaf6e9bb] <==
	I1018 09:33:17.889229       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:33:17.889444       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:33:17.889473       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:33:17.889770       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:33:17.889932       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:33:17.889972       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:33:17.889991       1 policy_source.go:240] refreshing policies
	I1018 09:33:17.897675       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:33:17.897708       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:33:17.897716       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:33:17.897722       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:33:17.904540       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:33:17.914071       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1018 09:33:17.942252       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:33:18.230992       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:33:18.348063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:33:19.204932       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:33:19.323453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:33:19.428164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:33:19.472183       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:33:19.617285       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.121.5"}
	I1018 09:33:19.642759       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.56.161"}
	I1018 09:33:20.831146       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:33:21.387596       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:33:21.429146       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9e4bebc346e34245095acfdc99e4bf27d586ba1008354824cc3842710f552d3d] <==
	I1018 09:33:20.837116       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:33:20.837609       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:33:20.843176       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:33:20.847493       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:33:20.847805       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:33:20.848049       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:33:20.848069       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:33:20.849076       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:33:20.851932       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:33:20.859333       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:33:20.859623       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:33:20.859937       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:33:20.868173       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:33:20.868294       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:33:20.872499       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:33:20.873391       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:33:20.873430       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:33:20.873401       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:33:20.877002       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:33:20.877024       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:33:20.877036       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:33:20.882128       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:33:20.885405       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:33:20.892650       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:33:21.443613       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [c1528bdaee222f9277110c1d5151cc9bfb3371a213419bb2ef053388848c0a56] <==
	I1018 09:33:19.409025       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:33:19.693319       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:33:19.894809       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:33:19.894901       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:33:19.895027       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:33:19.949252       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:33:19.949302       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:33:19.954049       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:33:19.954570       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:33:19.954638       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:33:19.958406       1 config.go:200] "Starting service config controller"
	I1018 09:33:19.958427       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:33:19.958440       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:33:19.958444       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:33:19.958452       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:33:19.958460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:33:19.959086       1 config.go:309] "Starting node config controller"
	I1018 09:33:19.959097       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:33:19.959104       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:33:20.058992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:33:20.058998       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:33:20.059036       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [836750ba877589f7642d95bcc7eaea0db209e4198f52173d3d62e2a5392defad] <==
	I1018 09:33:16.374175       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:33:19.877900       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:33:19.877997       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:33:19.883492       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:33:19.883684       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:33:19.883870       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:33:19.883667       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:33:19.883956       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:33:19.883707       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:33:19.887949       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:33:19.883720       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:33:19.984082       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:33:19.984476       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:33:19.998068       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:33:21 embed-certs-559379 kubelet[782]: I1018 09:33:21.447435     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdc5b\" (UniqueName: \"kubernetes.io/projected/da0e8792-a12a-47e9-9b51-18561a66da84-kube-api-access-cdc5b\") pod \"dashboard-metrics-scraper-6ffb444bf9-s9n4f\" (UID: \"da0e8792-a12a-47e9-9b51-18561a66da84\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f"
	Oct 18 09:33:21 embed-certs-559379 kubelet[782]: W1018 09:33:21.666043     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/crio-2b5e648e0b54c81294b0b4e409d6f75789d59d5f925585a9c73f39f2dd180ba7 WatchSource:0}: Error finding container 2b5e648e0b54c81294b0b4e409d6f75789d59d5f925585a9c73f39f2dd180ba7: Status 404 returned error can't find the container with id 2b5e648e0b54c81294b0b4e409d6f75789d59d5f925585a9c73f39f2dd180ba7
	Oct 18 09:33:21 embed-certs-559379 kubelet[782]: W1018 09:33:21.681208     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/crio-ae28d7cca8a0b039b6993cf9b09d28cce1b40e1e6e9d08c5c79b42ce64691dc5 WatchSource:0}: Error finding container ae28d7cca8a0b039b6993cf9b09d28cce1b40e1e6e9d08c5c79b42ce64691dc5: Status 404 returned error can't find the container with id ae28d7cca8a0b039b6993cf9b09d28cce1b40e1e6e9d08c5c79b42ce64691dc5
	Oct 18 09:33:25 embed-certs-559379 kubelet[782]: I1018 09:33:25.801548     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:33:26 embed-certs-559379 kubelet[782]: I1018 09:33:26.350366     782 scope.go:117] "RemoveContainer" containerID="78906d6e8c9810d0981130a18266a4a011baf6acf5cb43923a0659cb06338721"
	Oct 18 09:33:27 embed-certs-559379 kubelet[782]: I1018 09:33:27.355791     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:27 embed-certs-559379 kubelet[782]: E1018 09:33:27.356020     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:33:27 embed-certs-559379 kubelet[782]: I1018 09:33:27.356242     782 scope.go:117] "RemoveContainer" containerID="78906d6e8c9810d0981130a18266a4a011baf6acf5cb43923a0659cb06338721"
	Oct 18 09:33:28 embed-certs-559379 kubelet[782]: I1018 09:33:28.360902     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:28 embed-certs-559379 kubelet[782]: E1018 09:33:28.361061     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:33:30 embed-certs-559379 kubelet[782]: I1018 09:33:30.160773     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:30 embed-certs-559379 kubelet[782]: E1018 09:33:30.160997     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: I1018 09:33:45.208894     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: I1018 09:33:45.420779     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: I1018 09:33:45.421281     782 scope.go:117] "RemoveContainer" containerID="684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: E1018 09:33:45.421501     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: I1018 09:33:45.448679     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d75lm" podStartSLOduration=15.35494817 podStartE2EDuration="24.448658734s" podCreationTimestamp="2025-10-18 09:33:21 +0000 UTC" firstStartedPulling="2025-10-18 09:33:21.689587675 +0000 UTC m=+9.669579778" lastFinishedPulling="2025-10-18 09:33:30.783298247 +0000 UTC m=+18.763290342" observedRunningTime="2025-10-18 09:33:31.392839837 +0000 UTC m=+19.372831940" watchObservedRunningTime="2025-10-18 09:33:45.448658734 +0000 UTC m=+33.428650837"
	Oct 18 09:33:49 embed-certs-559379 kubelet[782]: I1018 09:33:49.434367     782 scope.go:117] "RemoveContainer" containerID="90117d3668eec91cac997ca9f7c2efbc2a28365287180d51f10287bfcca9e046"
	Oct 18 09:33:50 embed-certs-559379 kubelet[782]: I1018 09:33:50.161424     782 scope.go:117] "RemoveContainer" containerID="684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	Oct 18 09:33:50 embed-certs-559379 kubelet[782]: E1018 09:33:50.161641     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:34:02 embed-certs-559379 kubelet[782]: I1018 09:34:02.210179     782 scope.go:117] "RemoveContainer" containerID="684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	Oct 18 09:34:02 embed-certs-559379 kubelet[782]: E1018 09:34:02.210863     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:34:10 embed-certs-559379 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:34:10 embed-certs-559379 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:34:10 embed-certs-559379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0e3ede05f52a881ea5d8c2a1b82dd79a395d0f564a7b3fd96fd62e991cc448db] <==
	2025/10/18 09:33:30 Using namespace: kubernetes-dashboard
	2025/10/18 09:33:30 Using in-cluster config to connect to apiserver
	2025/10/18 09:33:30 Using secret token for csrf signing
	2025/10/18 09:33:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:33:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:33:30 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:33:30 Generating JWE encryption key
	2025/10/18 09:33:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:33:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:33:31 Initializing JWE encryption key from synchronized object
	2025/10/18 09:33:31 Creating in-cluster Sidecar client
	2025/10/18 09:33:31 Serving insecurely on HTTP port: 9090
	2025/10/18 09:33:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:34:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:33:30 Starting overwatch
	
	
	==> storage-provisioner [5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76] <==
	I1018 09:33:49.504432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:33:49.526438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:33:49.526700       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:33:49.532214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:52.987021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:57.247659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:00.845578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:03.899361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:06.928365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:06.954014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:34:06.954285       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:34:06.955886       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-559379_90e7dc75-2af5-4f66-b8f9-759888fc9276!
	I1018 09:34:06.956611       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2540f48-50f2-4174-a7e5-a267c71bfb5e", APIVersion:"v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-559379_90e7dc75-2af5-4f66-b8f9-759888fc9276 became leader
	W1018 09:34:06.962731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:06.976139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:34:07.056476       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-559379_90e7dc75-2af5-4f66-b8f9-759888fc9276!
	W1018 09:34:08.979758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:08.988512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:10.992109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:10.996781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:13.000351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:13.014894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [90117d3668eec91cac997ca9f7c2efbc2a28365287180d51f10287bfcca9e046] <==
	I1018 09:33:19.059504       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:33:49.061222       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-559379 -n embed-certs-559379
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-559379 -n embed-certs-559379: exit status 2 (520.851782ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-559379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-559379
helpers_test.go:243: (dbg) docker inspect embed-certs-559379:

-- stdout --
	[
	    {
	        "Id": "28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0",
	        "Created": "2025-10-18T09:31:14.969231495Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1471190,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:33:04.511564204Z",
	            "FinishedAt": "2025-10-18T09:33:03.51320584Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/hostname",
	        "HostsPath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/hosts",
	        "LogPath": "/var/lib/docker/containers/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0-json.log",
	        "Name": "/embed-certs-559379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-559379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-559379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0",
	                "LowerDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e2f9d97ea0ae68fd5ebc6259193de39e6a86f87d5364288929a3a18dfb61772/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-559379",
	                "Source": "/var/lib/docker/volumes/embed-certs-559379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-559379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-559379",
	                "name.minikube.sigs.k8s.io": "embed-certs-559379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e5475df9cd643e27d5a7ade26ee37641474b730114d47e60cab35edb74a60e8",
	            "SandboxKey": "/var/run/docker/netns/8e5475df9cd6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34896"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34897"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34900"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34898"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34899"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-559379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:45:4a:d4:27:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c6157e554c859f57a7166278cd1d0343828367a13a26ff7877c8ce4c80e272af",
	                    "EndpointID": "7811e1d2d5d28dcfb90525e53fae320eca11d7206bb5416986ebf7a0d7cdcdf6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-559379",
	                        "28d5892e22ac"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-559379 -n embed-certs-559379
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-559379 -n embed-certs-559379: exit status 2 (509.938528ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-559379 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-559379 logs -n 25: (1.856162985s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:29 UTC │ 18 Oct 25 09:30 UTC │
	│ start   │ -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-854768       │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ delete  │ -p cert-expiration-854768                                                                                                                                                                                                                     │ cert-expiration-854768       │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ image   │ old-k8s-version-136598 image list --format=json                                                                                                                                                                                               │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:30 UTC │
	│ pause   │ -p old-k8s-version-136598 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p no-preload-886951 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p embed-certs-559379 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-559379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ image   │ no-preload-886951 image list --format=json                                                                                                                                                                                                    │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p disable-driver-mounts-877810                                                                                                                                                                                                               │ disable-driver-mounts-877810 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ image   │ embed-certs-559379 image list --format=json                                                                                                                                                                                                   │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ pause   │ -p embed-certs-559379 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
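
For the embed-certs profile, the audit trail reduces to start → stop → start → pause (addon toggles omitted); a minimal sketch of that sequence with the flags copied from the table — the pause row is the one left without an end time:

	out/minikube-linux-arm64 start -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1
	out/minikube-linux-arm64 stop -p embed-certs-559379 --alsologtostderr -v=3
	out/minikube-linux-arm64 start -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1
	out/minikube-linux-arm64 pause -p embed-certs-559379 --alsologtostderr -v=1
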
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:33:48
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:33:48.478205 1474687 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:48.478322 1474687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:48.478331 1474687 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:48.478336 1474687 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:48.478603 1474687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:33:48.479022 1474687 out.go:368] Setting JSON to false
	I1018 09:33:48.480065 1474687 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40576,"bootTime":1760739453,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:33:48.480135 1474687 start.go:141] virtualization:  
	I1018 09:33:48.483806 1474687 out.go:179] * [default-k8s-diff-port-593480] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:33:48.487762 1474687 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:33:48.487902 1474687 notify.go:220] Checking for updates...
	I1018 09:33:48.493914 1474687 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:33:48.496916 1474687 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:33:48.500013 1474687 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:33:48.502909 1474687 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:33:48.505797 1474687 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:33:48.509406 1474687 config.go:182] Loaded profile config "embed-certs-559379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:48.509526 1474687 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:33:48.535975 1474687 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:33:48.536112 1474687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:48.592438 1474687 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:33:48.582553625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:33:48.592547 1474687 docker.go:318] overlay module found
	I1018 09:33:48.595664 1474687 out.go:179] * Using the docker driver based on user configuration
	I1018 09:33:48.598528 1474687 start.go:305] selected driver: docker
	I1018 09:33:48.598550 1474687 start.go:925] validating driver "docker" against <nil>
	I1018 09:33:48.598578 1474687 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:33:48.599294 1474687 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:48.656625 1474687 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:33:48.647322249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:33:48.656786 1474687 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:33:48.657044 1474687 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:33:48.659909 1474687 out.go:179] * Using Docker driver with root privileges
	I1018 09:33:48.662793 1474687 cni.go:84] Creating CNI manager for ""
	I1018 09:33:48.662856 1474687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:33:48.662871 1474687 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:33:48.662951 1474687 start.go:349] cluster config:
	{Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:33:48.665911 1474687 out.go:179] * Starting "default-k8s-diff-port-593480" primary control-plane node in "default-k8s-diff-port-593480" cluster
	I1018 09:33:48.668758 1474687 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:33:48.671805 1474687 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:33:48.674733 1474687 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:33:48.674803 1474687 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:33:48.674840 1474687 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:33:48.674853 1474687 cache.go:58] Caching tarball of preloaded images
	I1018 09:33:48.674936 1474687 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:33:48.674950 1474687 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:33:48.675048 1474687 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json ...
	I1018 09:33:48.675075 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json: {Name:mkc696d21d44298ffc51a77d02ca52630b2fec37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:33:48.692499 1474687 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:33:48.692522 1474687 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:33:48.692533 1474687 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:33:48.692555 1474687 start.go:360] acquireMachinesLock for default-k8s-diff-port-593480: {Name:mk139126e1ddb766657a5fd510c1f904e5550412 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:33:48.692656 1474687 start.go:364] duration metric: took 80.983µs to acquireMachinesLock for "default-k8s-diff-port-593480"
	I1018 09:33:48.692687 1474687 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:33:48.692759 1474687 start.go:125] createHost starting for "" (driver="docker")
	W1018 09:33:45.278635 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:47.762342 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	I1018 09:33:48.699035 1474687 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:33:48.699272 1474687 start.go:159] libmachine.API.Create for "default-k8s-diff-port-593480" (driver="docker")
	I1018 09:33:48.699314 1474687 client.go:168] LocalClient.Create starting
	I1018 09:33:48.699382 1474687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem
	I1018 09:33:48.699414 1474687 main.go:141] libmachine: Decoding PEM data...
	I1018 09:33:48.699432 1474687 main.go:141] libmachine: Parsing certificate...
	I1018 09:33:48.699577 1474687 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem
	I1018 09:33:48.699624 1474687 main.go:141] libmachine: Decoding PEM data...
	I1018 09:33:48.699641 1474687 main.go:141] libmachine: Parsing certificate...
	I1018 09:33:48.700044 1474687 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-593480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:33:48.718591 1474687 cli_runner.go:211] docker network inspect default-k8s-diff-port-593480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:33:48.718680 1474687 network_create.go:284] running [docker network inspect default-k8s-diff-port-593480] to gather additional debugging logs...
	I1018 09:33:48.718703 1474687 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-593480
	W1018 09:33:48.735448 1474687 cli_runner.go:211] docker network inspect default-k8s-diff-port-593480 returned with exit code 1
	I1018 09:33:48.735478 1474687 network_create.go:287] error running [docker network inspect default-k8s-diff-port-593480]: docker network inspect default-k8s-diff-port-593480: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-593480 not found
	I1018 09:33:48.735491 1474687 network_create.go:289] output of [docker network inspect default-k8s-diff-port-593480]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-593480 not found
	
	** /stderr **
	I1018 09:33:48.735607 1474687 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:33:48.754166 1474687 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-521f8f572997 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:7e:e5:c0:67:29} reservation:<nil>}
	I1018 09:33:48.754540 1474687 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b81e76c4e4f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:bf:e8:f1:22:c8} reservation:<nil>}
	I1018 09:33:48.754882 1474687 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-41e3e621447e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:fc:17:ff:cd:8c} reservation:<nil>}
	I1018 09:33:48.755165 1474687 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c6157e554c85 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:ce:1a:e4:53:ce} reservation:<nil>}
	I1018 09:33:48.755599 1474687 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d5540}
	I1018 09:33:48.755621 1474687 network_create.go:124] attempt to create docker network default-k8s-diff-port-593480 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 09:33:48.755685 1474687 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-593480 default-k8s-diff-port-593480
	I1018 09:33:48.816762 1474687 network_create.go:108] docker network default-k8s-diff-port-593480 192.168.85.0/24 created
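
The subnet scan above walks minikube's private CIDR ladder (192.168.49/58/67/76.0/24 were taken, 192.168.85.0/24 was free); the result can be double-checked from the host with standard docker commands, for example:

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	docker network inspect default-k8s-diff-port-593480 --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.85.0/24
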
	I1018 09:33:48.816794 1474687 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-593480" container
	I1018 09:33:48.816865 1474687 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:33:48.832427 1474687 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-593480 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-593480 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:33:48.849252 1474687 oci.go:103] Successfully created a docker volume default-k8s-diff-port-593480
	I1018 09:33:48.849348 1474687 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-593480-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-593480 --entrypoint /usr/bin/test -v default-k8s-diff-port-593480:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:33:49.451205 1474687 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-593480
	I1018 09:33:49.451253 1474687 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:33:49.451273 1474687 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:33:49.451359 1474687 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-593480:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 09:33:50.262152 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:52.761580 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	W1018 09:33:54.763058 1471064 pod_ready.go:104] pod "coredns-66bc5c9577-t9blq" is not "Ready", error: <nil>
	I1018 09:33:56.261132 1471064 pod_ready.go:94] pod "coredns-66bc5c9577-t9blq" is "Ready"
	I1018 09:33:56.261245 1471064 pod_ready.go:86] duration metric: took 36.506018592s for pod "coredns-66bc5c9577-t9blq" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.264017 1471064 pod_ready.go:83] waiting for pod "etcd-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.270004 1471064 pod_ready.go:94] pod "etcd-embed-certs-559379" is "Ready"
	I1018 09:33:56.270032 1471064 pod_ready.go:86] duration metric: took 5.985203ms for pod "etcd-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.272546 1471064 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.279762 1471064 pod_ready.go:94] pod "kube-apiserver-embed-certs-559379" is "Ready"
	I1018 09:33:56.279879 1471064 pod_ready.go:86] duration metric: took 7.308145ms for pod "kube-apiserver-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.283320 1471064 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.460143 1471064 pod_ready.go:94] pod "kube-controller-manager-embed-certs-559379" is "Ready"
	I1018 09:33:56.460218 1471064 pod_ready.go:86] duration metric: took 176.872545ms for pod "kube-controller-manager-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:56.659292 1471064 pod_ready.go:83] waiting for pod "kube-proxy-82pzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:57.059218 1471064 pod_ready.go:94] pod "kube-proxy-82pzn" is "Ready"
	I1018 09:33:57.059244 1471064 pod_ready.go:86] duration metric: took 399.924372ms for pod "kube-proxy-82pzn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:57.259090 1471064 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:57.659100 1471064 pod_ready.go:94] pod "kube-scheduler-embed-certs-559379" is "Ready"
	I1018 09:33:57.659128 1471064 pod_ready.go:86] duration metric: took 399.965791ms for pod "kube-scheduler-embed-certs-559379" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:33:57.659142 1471064 pod_ready.go:40] duration metric: took 37.909220937s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:33:57.712479 1471064 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:33:57.715740 1471064 out.go:179] * Done! kubectl is now configured to use "embed-certs-559379" cluster and "default" namespace by default
	I1018 09:33:54.369952 1474687 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-593480:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.918532874s)
	I1018 09:33:54.369987 1474687 kic.go:203] duration metric: took 4.918710625s to extract preloaded images to volume ...
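
The two sidecar runs above implement minikube's preload: a throwaway container mounts the named volume at /var and untars the image cache into it. Whether the images landed can be spot-checked with a scratch container (busybox here is an illustrative choice, not part of the test run):

	docker run --rm -v default-k8s-diff-port-593480:/var busybox ls /var/lib/containers
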
	W1018 09:33:54.370116 1474687 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:33:54.370227 1474687 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:33:54.424366 1474687 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-593480 --name default-k8s-diff-port-593480 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-593480 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-593480 --network default-k8s-diff-port-593480 --ip 192.168.85.2 --volume default-k8s-diff-port-593480:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:33:54.719148 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Running}}
	I1018 09:33:54.743583 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:33:54.774304 1474687 cli_runner.go:164] Run: docker exec default-k8s-diff-port-593480 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:33:54.833859 1474687 oci.go:144] the created container "default-k8s-diff-port-593480" has a running status.
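
Every control port in the docker run above is published to an ephemeral port on 127.0.0.1; the SSH mapping minikube resolves next can be read back directly, for example:

	docker port default-k8s-diff-port-593480 22/tcp   # 127.0.0.1:34901 in this run, matching the SSH dials below
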
	I1018 09:33:54.833894 1474687 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa...
	I1018 09:33:54.932276 1474687 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:33:54.969701 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:33:54.989629 1474687 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:33:54.989653 1474687 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-593480 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:33:55.044563 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:33:55.066007 1474687 machine.go:93] provisionDockerMachine start ...
	I1018 09:33:55.066181 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:55.105060 1474687 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:55.105506 1474687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34901 <nil> <nil>}
	I1018 09:33:55.105522 1474687 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:33:55.106552 1474687 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53442->127.0.0.1:34901: read: connection reset by peer
	I1018 09:33:58.255268 1474687 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-593480
	
	I1018 09:33:58.255294 1474687 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-593480"
	I1018 09:33:58.255364 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:58.272059 1474687 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:58.272405 1474687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34901 <nil> <nil>}
	I1018 09:33:58.272425 1474687 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-593480 && echo "default-k8s-diff-port-593480" | sudo tee /etc/hostname
	I1018 09:33:58.429327 1474687 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-593480
	
	I1018 09:33:58.429405 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:58.448156 1474687 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:58.448468 1474687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34901 <nil> <nil>}
	I1018 09:33:58.448491 1474687 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-593480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-593480/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-593480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:33:58.596037 1474687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:33:58.596106 1474687 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:33:58.596145 1474687 ubuntu.go:190] setting up certificates
	I1018 09:33:58.596184 1474687 provision.go:84] configureAuth start
	I1018 09:33:58.596291 1474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:33:58.612921 1474687 provision.go:143] copyHostCerts
	I1018 09:33:58.612981 1474687 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:33:58.613002 1474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:33:58.613088 1474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:33:58.613181 1474687 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:33:58.613186 1474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:33:58.613213 1474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:33:58.613270 1474687 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:33:58.613275 1474687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:33:58.613299 1474687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:33:58.613353 1474687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-593480 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-593480 localhost minikube]
	I1018 09:33:59.716366 1474687 provision.go:177] copyRemoteCerts
	I1018 09:33:59.716439 1474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:33:59.716491 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:59.736989 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:33:59.839356 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:33:59.856144 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:33:59.873730 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:33:59.890413 1474687 provision.go:87] duration metric: took 1.294193706s to configureAuth
	I1018 09:33:59.890479 1474687 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:33:59.890682 1474687 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:59.890792 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:33:59.908851 1474687 main.go:141] libmachine: Using SSH client type: native
	I1018 09:33:59.909170 1474687 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34901 <nil> <nil>}
	I1018 09:33:59.909190 1474687 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:34:00.489958 1474687 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:34:00.489983 1474687 machine.go:96] duration metric: took 5.423958334s to provisionDockerMachine
	I1018 09:34:00.489993 1474687 client.go:171] duration metric: took 11.790672683s to LocalClient.Create
	I1018 09:34:00.490009 1474687 start.go:167] duration metric: took 11.790738059s to libmachine.API.Create "default-k8s-diff-port-593480"
	I1018 09:34:00.490016 1474687 start.go:293] postStartSetup for "default-k8s-diff-port-593480" (driver="docker")
	I1018 09:34:00.490027 1474687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:34:00.490119 1474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:34:00.490170 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:00.509763 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:00.615943 1474687 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:34:00.619388 1474687 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:34:00.619417 1474687 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:34:00.619428 1474687 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:34:00.619484 1474687 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:34:00.619575 1474687 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:34:00.619682 1474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:34:00.627739 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:34:00.646371 1474687 start.go:296] duration metric: took 156.338577ms for postStartSetup
	I1018 09:34:00.646772 1474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:34:00.663893 1474687 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json ...
	I1018 09:34:00.664173 1474687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:34:00.664271 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:00.681468 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:00.780989 1474687 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:34:00.785796 1474687 start.go:128] duration metric: took 12.093022813s to createHost
	I1018 09:34:00.785819 1474687 start.go:83] releasing machines lock for "default-k8s-diff-port-593480", held for 12.093148882s
	I1018 09:34:00.785891 1474687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:34:00.802448 1474687 ssh_runner.go:195] Run: cat /version.json
	I1018 09:34:00.802503 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:00.802518 1474687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:34:00.802592 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:00.826736 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:00.828165 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:01.026900 1474687 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:01.033345 1474687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:34:01.070530 1474687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:34:01.074974 1474687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:34:01.075098 1474687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:34:01.108086 1474687 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 09:34:01.108162 1474687 start.go:495] detecting cgroup driver to use...
	I1018 09:34:01.108212 1474687 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:34:01.108294 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:34:01.128625 1474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:34:01.147064 1474687 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:34:01.147152 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:34:01.167759 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:34:01.188987 1474687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:34:01.312997 1474687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:34:01.433183 1474687 docker.go:234] disabling docker service ...
	I1018 09:34:01.433289 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:34:01.454484 1474687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:34:01.468560 1474687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:34:01.587887 1474687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:34:01.706280 1474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:34:01.719380 1474687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:34:01.738080 1474687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:34:01.738190 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.747546 1474687 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:34:01.747645 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.757351 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.766731 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.776298 1474687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:34:01.786913 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.796438 1474687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.812847 1474687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:01.822611 1474687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:34:01.833645 1474687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:34:01.841616 1474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:34:01.987862 1474687 ssh_runner.go:195] Run: sudo systemctl restart crio
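
Condensed, the runtime preparation above amounts to the following sequence (a sketch of the commands already shown in the log, not an authoritative recipe):

	# point crictl at the cri-o socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and match the host's cgroupfs driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# let pods bind low ports, enable forwarding, then apply
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
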
	I1018 09:34:02.155595 1474687 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:34:02.155890 1474687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:34:02.165907 1474687 start.go:563] Will wait 60s for crictl version
	I1018 09:34:02.165978 1474687 ssh_runner.go:195] Run: which crictl
	I1018 09:34:02.169957 1474687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:34:02.197691 1474687 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:34:02.197783 1474687 ssh_runner.go:195] Run: crio --version
	I1018 09:34:02.237603 1474687 ssh_runner.go:195] Run: crio --version
	I1018 09:34:02.277412 1474687 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:34:02.280244 1474687 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-593480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:34:02.297438 1474687 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:34:02.302151 1474687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:34:02.312443 1474687 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:34:02.312568 1474687 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:34:02.312624 1474687 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:34:02.353095 1474687 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:34:02.353120 1474687 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:34:02.353177 1474687 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:34:02.382401 1474687 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:34:02.382427 1474687 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:34:02.382437 1474687 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1018 09:34:02.382529 1474687 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-593480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
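	The rendered unit text above is written out as a systemd drop-in (the 378-byte 10-kubeadm.conf scp'd a few lines below). The empty ExecStart= line first clears the packaged kubelet command so the second ExecStart= replaces it instead of appending. Installing such a drop-in by hand looks roughly like this (flag list abridged for illustration):
	
	  sudo mkdir -p /etc/systemd/system/kubelet.service.d
	  printf '%s\n' '[Unit]' 'Wants=crio.service' '' '[Service]' 'ExecStart=' \
	    'ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf' \
	    | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet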
	I1018 09:34:02.382618 1474687 ssh_runner.go:195] Run: crio config
	I1018 09:34:02.449689 1474687 cni.go:84] Creating CNI manager for ""
	I1018 09:34:02.449717 1474687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:34:02.449733 1474687 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:34:02.449777 1474687 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-593480 NodeName:default-k8s-diff-port-593480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:34:02.449940 1474687 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-593480"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:34:02.450017 1474687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:34:02.458772 1474687 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:34:02.458849 1474687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:34:02.467391 1474687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:34:02.480233 1474687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:34:02.493101 1474687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
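	The stacked kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) now sit on the node as kubeadm.yaml.new. A file like this can be sanity-checked before init; a hedged sketch (kubeadm config validate exists in recent kubeadm releases):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	  # or walk the whole init path without changing the node:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run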
	I1018 09:34:02.506730 1474687 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:34:02.510436 1474687 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:34:02.520320 1474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:34:02.632848 1474687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:34:02.652548 1474687 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480 for IP: 192.168.85.2
	I1018 09:34:02.652632 1474687 certs.go:195] generating shared ca certs ...
	I1018 09:34:02.652664 1474687 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:02.652848 1474687 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:34:02.652937 1474687 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:34:02.652971 1474687 certs.go:257] generating profile certs ...
	I1018 09:34:02.653047 1474687 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.key
	I1018 09:34:02.653084 1474687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt with IP's: []
	I1018 09:34:03.075274 1474687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt ...
	I1018 09:34:03.075308 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: {Name:mk353702f41496c5887bc703787e83a6b9652bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:03.075556 1474687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.key ...
	I1018 09:34:03.075572 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.key: {Name:mk2dab06626ab6977b325fa4ad2d3ba5fcae2043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:03.075729 1474687 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5
	I1018 09:34:03.075758 1474687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt.3ec3eca5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 09:34:03.585561 1474687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt.3ec3eca5 ...
	I1018 09:34:03.585593 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt.3ec3eca5: {Name:mk9b711777f0503a3fada68d61b8c155c50a057a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:03.585778 1474687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5 ...
	I1018 09:34:03.585799 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5: {Name:mka3f2c262afdf16e83da78076b3f91571280d44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:03.585890 1474687 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt.3ec3eca5 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt
	I1018 09:34:03.585981 1474687 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key
	I1018 09:34:03.586043 1474687 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key
	I1018 09:34:03.586063 1474687 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt with IP's: []
	I1018 09:34:04.154396 1474687 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt ...
	I1018 09:34:04.154428 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt: {Name:mk34b9ba2e1ebf0c882833d7b8f1337797571013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:04.154612 1474687 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key ...
	I1018 09:34:04.154626 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key: {Name:mkd1feca4d14672ebd2446f2b1978df69cd0a9ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
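	The certs.go/crypto.go steps above mint the profile's client, apiserver, and aggregator proxy-client certs against the shared minikubeCA, the apiserver cert carrying IP SANs for the service VIP (10.96.0.1), loopback, 10.0.0.1, and the node IP. A hypothetical openssl equivalent of that apiserver cert, assuming an existing ca.crt/ca.key pair (filenames illustrative; bash process substitution):
	
	  openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	    -subj "/CN=minikube" -out apiserver.csr
	  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	    -days 365 -out apiserver.crt \
	    -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2')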
	I1018 09:34:04.154801 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:34:04.154846 1474687 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:34:04.154860 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:34:04.154885 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:34:04.154911 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:34:04.154936 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:34:04.154986 1474687 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:34:04.155582 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:34:04.179806 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:34:04.199895 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:34:04.224489 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:34:04.241381 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:34:04.259021 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:34:04.276943 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:34:04.294159 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:34:04.311408 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:34:04.329620 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:34:04.346631 1474687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:34:04.365114 1474687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:34:04.378366 1474687 ssh_runner.go:195] Run: openssl version
	I1018 09:34:04.386494 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:34:04.395831 1474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:34:04.399558 1474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:34:04.399622 1474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:34:04.441473 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:34:04.449715 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:34:04.458537 1474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:34:04.462390 1474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:34:04.462449 1474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:34:04.503647 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:34:04.511916 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:34:04.520139 1474687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:34:04.523812 1474687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:34:04.523964 1474687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:34:04.565171 1474687 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
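	Each openssl x509 -hash -noout run above computes the subject-hash filename (51391683.0, 3ec20f2e.0, b5213941.0) under which OpenSSL's lookup expects the cert in /etc/ssl/certs, and the ln -fs lines create exactly the links c_rehash would. Reproducing one link by hand:
	
	  PEM=/usr/share/ca-certificates/minikubeCA.pem
	  H=$(openssl x509 -hash -noout -in "$PEM")    # prints the subject hash, e.g. b5213941
	  sudo ln -fs "$PEM" "/etc/ssl/certs/$H.0"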
	I1018 09:34:04.573454 1474687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:34:04.576740 1474687 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:34:04.576790 1474687 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:34:04.576872 1474687 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:04.576943 1474687 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:04.603451 1474687 cri.go:89] found id: ""
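	The empty ID list means the label-filtered query found no leftover kube-system containers to tear down before init. The same query by hand (flags as in the log; add --state to narrow further):
	
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	  # tabular variant, including exited containers:
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system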
	I1018 09:34:04.603523 1474687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:34:04.611653 1474687 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:34:04.619263 1474687 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:34:04.619329 1474687 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:34:04.626951 1474687 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:34:04.626974 1474687 kubeadm.go:157] found existing configuration files:
	
	I1018 09:34:04.627022 1474687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 09:34:04.634542 1474687 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:34:04.634615 1474687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:34:04.642402 1474687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 09:34:04.650009 1474687 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:34:04.650073 1474687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:34:04.657509 1474687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 09:34:04.665316 1474687 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:34:04.665380 1474687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:34:04.672588 1474687 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 09:34:04.679967 1474687 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:34:04.680034 1474687 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:34:04.687260 1474687 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:34:04.724214 1474687 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:34:04.724282 1474687 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:34:04.746859 1474687 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:34:04.746939 1474687 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:34:04.746981 1474687 kubeadm.go:318] OS: Linux
	I1018 09:34:04.747033 1474687 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:34:04.747090 1474687 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:34:04.747147 1474687 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:34:04.747208 1474687 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:34:04.747271 1474687 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:34:04.747323 1474687 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:34:04.747375 1474687 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:34:04.747428 1474687 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:34:04.747481 1474687 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:34:04.816475 1474687 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:34:04.816606 1474687 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:34:04.816712 1474687 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:34:04.824199 1474687 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:34:04.829985 1474687 out.go:252]   - Generating certificates and keys ...
	I1018 09:34:04.830097 1474687 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:34:04.830180 1474687 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:34:05.279691 1474687 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:34:06.592986 1474687 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:34:07.095610 1474687 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:34:07.353562 1474687 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:34:07.555523 1474687 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:34:07.555716 1474687 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-593480 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 09:34:07.879873 1474687 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:34:07.880299 1474687 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-593480 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 09:34:08.315676 1474687 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:34:08.598375 1474687 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:34:09.000621 1474687 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:34:09.000956 1474687 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:34:10.163835 1474687 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:34:10.976789 1474687 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:34:11.245888 1474687 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:34:11.707525 1474687 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:34:11.853269 1474687 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:34:11.854230 1474687 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:34:11.858288 1474687 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:34:11.861625 1474687 out.go:252]   - Booting up control plane ...
	I1018 09:34:11.861738 1474687 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:34:11.861829 1474687 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:34:11.861899 1474687 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:34:11.878829 1474687 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:34:11.879253 1474687 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:34:11.888403 1474687 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:34:11.890442 1474687 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:34:11.890811 1474687 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:34:12.076683 1474687 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:34:12.077303 1474687 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
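	The up-to-4m0s wait above polls the kubelet's local healthz endpoint on port 10248; the equivalent check by hand is a one-liner:
	
	  until curl -sf http://127.0.0.1:10248/healthz >/dev/null; do sleep 1; done; echo kubelet healthy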
	
	
	==> CRI-O <==
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.435782987Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b6340802-64fa-40dd-b958-96c9c8582e60 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.437598149Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=14609d96-96b8-4bee-9423-83397399e27b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.437844Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.443893842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.444077697Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/56f4dec220a495d8ffedf9771d7f3c362bcf37fd7737ac616afe99ca6a81ac9b/merged/etc/passwd: no such file or directory"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.444100999Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/56f4dec220a495d8ffedf9771d7f3c362bcf37fd7737ac616afe99ca6a81ac9b/merged/etc/group: no such file or directory"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.444374928Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.474984012Z" level=info msg="Created container 5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76: kube-system/storage-provisioner/storage-provisioner" id=14609d96-96b8-4bee-9423-83397399e27b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.476280689Z" level=info msg="Starting container: 5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76" id=29a7dfe5-ff16-44b1-b330-de6fe24c47ef name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:33:49 embed-certs-559379 crio[656]: time="2025-10-18T09:33:49.479833124Z" level=info msg="Started container" PID=1651 containerID=5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76 description=kube-system/storage-provisioner/storage-provisioner id=29a7dfe5-ff16-44b1-b330-de6fe24c47ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8a98d5b13c936582e07af1e06ee31df1d56e0c72c0413ecc79c2747e9d3a2cc
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.221886812Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.236142136Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.236324531Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.236397956Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.24923275Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.249767881Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.250040038Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.268143829Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.268328029Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.268413507Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.280221474Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.28040583Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.280499349Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.288191196Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:33:59 embed-certs-559379 crio[656]: time="2025-10-18T09:33:59.288358141Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5de25f73c22dd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   b8a98d5b13c93       storage-provisioner                          kube-system
	684cd6eff48e4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   2b5e648e0b54c       dashboard-metrics-scraper-6ffb444bf9-s9n4f   kubernetes-dashboard
	0e3ede05f52a8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   ae28d7cca8a0b       kubernetes-dashboard-855c9754f9-d75lm        kubernetes-dashboard
	0024b85e5ef70       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   6989432ac6f21       coredns-66bc5c9577-t9blq                     kube-system
	04f31e69246a0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   2ed8bf9d6a331       busybox                                      default
	f33068c7b92cd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   543899513fc98       kindnet-6ltrq                                kube-system
	c1528bdaee222       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   fe6cd3b1aa39e       kube-proxy-82pzn                             kube-system
	90117d3668eec       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   b8a98d5b13c93       storage-provisioner                          kube-system
	107f423c47421       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   530278d08635d       etcd-embed-certs-559379                      kube-system
	9e4bebc346e34       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   718e0bf2093a3       kube-controller-manager-embed-certs-559379   kube-system
	1fa42435e829f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9ddf0b97b5da7       kube-apiserver-embed-certs-559379            kube-system
	836750ba87758       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   9762eb5edde86       kube-scheduler-embed-certs-559379            kube-system
	
	
	==> coredns [0024b85e5ef70b9bf4ed4dae2ee9734cd4169e59884bd9f20cf0fa1d53ab8b4d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48773 - 2999 "HINFO IN 7994601189480743672.2215154079161539457. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014071895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
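	The reflector errors above show coredns timing out against the kubernetes Service VIP (10.96.0.1:443), which typically clears once kube-proxy has programmed the Service rules after the restart. Hedged checks from inside the node (not taken from this log; iptables proxy mode assumed):
	
	  sudo iptables-save | grep 10.96.0.1    # kube-proxy rules for the ClusterIP
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf \
	    get endpoints kubernetes             # the apiserver endpoint behind the VIP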
	
	
	==> describe nodes <==
	Name:               embed-certs-559379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-559379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=embed-certs-559379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_31_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:31:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-559379
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:34:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:33:48 +0000   Sat, 18 Oct 2025 09:31:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:33:48 +0000   Sat, 18 Oct 2025 09:31:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:33:48 +0000   Sat, 18 Oct 2025 09:31:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:33:48 +0000   Sat, 18 Oct 2025 09:32:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-559379
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                963b98db-af62-4b5f-9ed9-d04f81062030
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-t9blq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-embed-certs-559379                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m29s
	  kube-system                 kindnet-6ltrq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-embed-certs-559379             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-embed-certs-559379    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-82pzn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-embed-certs-559379             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s9n4f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d75lm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node embed-certs-559379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node embed-certs-559379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node embed-certs-559379 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m23s                  node-controller  Node embed-certs-559379 event: Registered Node embed-certs-559379 in Controller
	  Normal   NodeReady                100s                   kubelet          Node embed-certs-559379 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node embed-certs-559379 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node embed-certs-559379 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node embed-certs-559379 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-559379 event: Registered Node embed-certs-559379 in Controller
	
	
	==> dmesg <==
	[Oct18 09:12] overlayfs: idmapped layers are currently not supported
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	[Oct18 09:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [107f423c474214a77b70bea579d8693f96941573e52099aab36ca04cad80b9fb] <==
	{"level":"warn","ts":"2025-10-18T09:33:15.943714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:15.995898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.002650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.020344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.056220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.058396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.076746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.104302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.129205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.145265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.170975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.180504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.210688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.219250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.236524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.259152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.271635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.289855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.308321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.339968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.391814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.419111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.443502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.461519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:33:16.532330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60914","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:34:16 up 11:16,  0 user,  load average: 3.69, 3.33, 2.73
	Linux embed-certs-559379 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f33068c7b92cd4cb6c77a59c4716cbd106f6ac2f31c1ec7071a66ecb90a2813e] <==
	I1018 09:33:19.018294       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:33:19.018495       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:33:19.018618       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:33:19.018629       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:33:19.018639       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:33:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:33:19.219735       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:33:19.219811       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:33:19.222777       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:33:19.223751       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:33:49.220618       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:33:49.224199       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:33:49.224356       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:33:49.224477       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 09:33:50.123986       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:33:50.124108       1 metrics.go:72] Registering metrics
	I1018 09:33:50.124204       1 controller.go:711] "Syncing nftables rules"
	I1018 09:33:59.221570       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:33:59.221621       1 main.go:301] handling current node
	I1018 09:34:09.224618       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:34:09.224672       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1fa42435e829fa1ff7a0af9be9dc7035e7cc16ae52106466d057fafcbaf6e9bb] <==
	I1018 09:33:17.889229       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:33:17.889444       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:33:17.889473       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:33:17.889770       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:33:17.889932       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:33:17.889972       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:33:17.889991       1 policy_source.go:240] refreshing policies
	I1018 09:33:17.897675       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:33:17.897708       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:33:17.897716       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:33:17.897722       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:33:17.904540       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:33:17.914071       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1018 09:33:17.942252       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:33:18.230992       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:33:18.348063       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:33:19.204932       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:33:19.323453       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:33:19.428164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:33:19.472183       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:33:19.617285       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.121.5"}
	I1018 09:33:19.642759       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.56.161"}
	I1018 09:33:20.831146       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:33:21.387596       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:33:21.429146       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [9e4bebc346e34245095acfdc99e4bf27d586ba1008354824cc3842710f552d3d] <==
	I1018 09:33:20.837116       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:33:20.837609       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:33:20.843176       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:33:20.847493       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:33:20.847805       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:33:20.848049       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:33:20.848069       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:33:20.849076       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:33:20.851932       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:33:20.859333       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:33:20.859623       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:33:20.859937       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:33:20.868173       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:33:20.868294       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:33:20.872499       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:33:20.873391       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:33:20.873430       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:33:20.873401       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:33:20.877002       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:33:20.877024       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:33:20.877036       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:33:20.882128       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:33:20.885405       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:33:20.892650       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:33:21.443613       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [c1528bdaee222f9277110c1d5151cc9bfb3371a213419bb2ef053388848c0a56] <==
	I1018 09:33:19.409025       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:33:19.693319       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:33:19.894809       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:33:19.894901       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:33:19.895027       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:33:19.949252       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:33:19.949302       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:33:19.954049       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:33:19.954570       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:33:19.954638       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:33:19.958406       1 config.go:200] "Starting service config controller"
	I1018 09:33:19.958427       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:33:19.958440       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:33:19.958444       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:33:19.958452       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:33:19.958460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:33:19.959086       1 config.go:309] "Starting node config controller"
	I1018 09:33:19.959097       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:33:19.959104       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:33:20.058992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:33:20.058998       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:33:20.059036       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [836750ba877589f7642d95bcc7eaea0db209e4198f52173d3d62e2a5392defad] <==
	I1018 09:33:16.374175       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:33:19.877900       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:33:19.877997       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:33:19.883492       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:33:19.883684       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:33:19.883870       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:33:19.883667       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:33:19.883956       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:33:19.883707       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:33:19.887949       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:33:19.883720       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:33:19.984082       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:33:19.984476       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:33:19.998068       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:33:21 embed-certs-559379 kubelet[782]: I1018 09:33:21.447435     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdc5b\" (UniqueName: \"kubernetes.io/projected/da0e8792-a12a-47e9-9b51-18561a66da84-kube-api-access-cdc5b\") pod \"dashboard-metrics-scraper-6ffb444bf9-s9n4f\" (UID: \"da0e8792-a12a-47e9-9b51-18561a66da84\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f"
	Oct 18 09:33:21 embed-certs-559379 kubelet[782]: W1018 09:33:21.666043     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/crio-2b5e648e0b54c81294b0b4e409d6f75789d59d5f925585a9c73f39f2dd180ba7 WatchSource:0}: Error finding container 2b5e648e0b54c81294b0b4e409d6f75789d59d5f925585a9c73f39f2dd180ba7: Status 404 returned error can't find the container with id 2b5e648e0b54c81294b0b4e409d6f75789d59d5f925585a9c73f39f2dd180ba7
	Oct 18 09:33:21 embed-certs-559379 kubelet[782]: W1018 09:33:21.681208     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/28d5892e22acbb4aee9ab8966787a91744522fa04b863c7570f04701f5c19fa0/crio-ae28d7cca8a0b039b6993cf9b09d28cce1b40e1e6e9d08c5c79b42ce64691dc5 WatchSource:0}: Error finding container ae28d7cca8a0b039b6993cf9b09d28cce1b40e1e6e9d08c5c79b42ce64691dc5: Status 404 returned error can't find the container with id ae28d7cca8a0b039b6993cf9b09d28cce1b40e1e6e9d08c5c79b42ce64691dc5
	Oct 18 09:33:25 embed-certs-559379 kubelet[782]: I1018 09:33:25.801548     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:33:26 embed-certs-559379 kubelet[782]: I1018 09:33:26.350366     782 scope.go:117] "RemoveContainer" containerID="78906d6e8c9810d0981130a18266a4a011baf6acf5cb43923a0659cb06338721"
	Oct 18 09:33:27 embed-certs-559379 kubelet[782]: I1018 09:33:27.355791     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:27 embed-certs-559379 kubelet[782]: E1018 09:33:27.356020     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:33:27 embed-certs-559379 kubelet[782]: I1018 09:33:27.356242     782 scope.go:117] "RemoveContainer" containerID="78906d6e8c9810d0981130a18266a4a011baf6acf5cb43923a0659cb06338721"
	Oct 18 09:33:28 embed-certs-559379 kubelet[782]: I1018 09:33:28.360902     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:28 embed-certs-559379 kubelet[782]: E1018 09:33:28.361061     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:33:30 embed-certs-559379 kubelet[782]: I1018 09:33:30.160773     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:30 embed-certs-559379 kubelet[782]: E1018 09:33:30.160997     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: I1018 09:33:45.208894     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: I1018 09:33:45.420779     782 scope.go:117] "RemoveContainer" containerID="770e4323a7648d40dfe70f9a3a878d3f6ed519c0dc490df270fc89b69d4a63f1"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: I1018 09:33:45.421281     782 scope.go:117] "RemoveContainer" containerID="684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: E1018 09:33:45.421501     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:33:45 embed-certs-559379 kubelet[782]: I1018 09:33:45.448679     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d75lm" podStartSLOduration=15.35494817 podStartE2EDuration="24.448658734s" podCreationTimestamp="2025-10-18 09:33:21 +0000 UTC" firstStartedPulling="2025-10-18 09:33:21.689587675 +0000 UTC m=+9.669579778" lastFinishedPulling="2025-10-18 09:33:30.783298247 +0000 UTC m=+18.763290342" observedRunningTime="2025-10-18 09:33:31.392839837 +0000 UTC m=+19.372831940" watchObservedRunningTime="2025-10-18 09:33:45.448658734 +0000 UTC m=+33.428650837"
	Oct 18 09:33:49 embed-certs-559379 kubelet[782]: I1018 09:33:49.434367     782 scope.go:117] "RemoveContainer" containerID="90117d3668eec91cac997ca9f7c2efbc2a28365287180d51f10287bfcca9e046"
	Oct 18 09:33:50 embed-certs-559379 kubelet[782]: I1018 09:33:50.161424     782 scope.go:117] "RemoveContainer" containerID="684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	Oct 18 09:33:50 embed-certs-559379 kubelet[782]: E1018 09:33:50.161641     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:34:02 embed-certs-559379 kubelet[782]: I1018 09:34:02.210179     782 scope.go:117] "RemoveContainer" containerID="684cd6eff48e43cc623fa04e58de4efa0f8efaa9dc177f98042f63ff74357b47"
	Oct 18 09:34:02 embed-certs-559379 kubelet[782]: E1018 09:34:02.210863     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s9n4f_kubernetes-dashboard(da0e8792-a12a-47e9-9b51-18561a66da84)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s9n4f" podUID="da0e8792-a12a-47e9-9b51-18561a66da84"
	Oct 18 09:34:10 embed-certs-559379 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:34:10 embed-certs-559379 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:34:10 embed-certs-559379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [0e3ede05f52a881ea5d8c2a1b82dd79a395d0f564a7b3fd96fd62e991cc448db] <==
	2025/10/18 09:33:30 Using namespace: kubernetes-dashboard
	2025/10/18 09:33:30 Using in-cluster config to connect to apiserver
	2025/10/18 09:33:30 Using secret token for csrf signing
	2025/10/18 09:33:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:33:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:33:30 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:33:30 Generating JWE encryption key
	2025/10/18 09:33:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:33:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:33:31 Initializing JWE encryption key from synchronized object
	2025/10/18 09:33:31 Creating in-cluster Sidecar client
	2025/10/18 09:33:31 Serving insecurely on HTTP port: 9090
	2025/10/18 09:33:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:34:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:33:30 Starting overwatch
	
	
	==> storage-provisioner [5de25f73c22dded133ad60394a22310d0c8b5f7494398d4c19b2a0b733701d76] <==
	I1018 09:33:49.526438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:33:49.526700       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:33:49.532214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:52.987021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:33:57.247659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:00.845578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:03.899361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:06.928365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:06.954014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:34:06.954285       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:34:06.955886       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-559379_90e7dc75-2af5-4f66-b8f9-759888fc9276!
	I1018 09:34:06.956611       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2540f48-50f2-4174-a7e5-a267c71bfb5e", APIVersion:"v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-559379_90e7dc75-2af5-4f66-b8f9-759888fc9276 became leader
	W1018 09:34:06.962731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:06.976139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:34:07.056476       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-559379_90e7dc75-2af5-4f66-b8f9-759888fc9276!
	W1018 09:34:08.979758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:08.988512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:10.992109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:10.996781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:13.000351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:13.014894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:15.031045       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:15.038451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:17.041425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:34:17.056720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [90117d3668eec91cac997ca9f7c2efbc2a28365287180d51f10287bfcca9e046] <==
	I1018 09:33:19.059504       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:33:49.061222       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-559379 -n embed-certs-559379
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-559379 -n embed-certs-559379: exit status 2 (528.920817ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-559379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.48s)
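For reference, the `status --format` probe used in the post-mortem above evaluates a Go template against minikube's status struct, so the same invocation can report the other component fields as well. A minimal sketch, assuming the standard field names (Host, Kubelet, APIServer) and the profile name from this run; this is a hand re-run for illustration, not part of the harness:

	# Hypothetical manual re-run of the harness's probe, extended to the
	# Host and Kubelet fields; embed-certs-559379 is the profile used above.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-559379 -n embed-certs-559379
	out/minikube-linux-arm64 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}' -p embed-certs-559379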

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-250274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-250274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (283.169871ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-250274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
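For reference, the MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused check, which shells out to `sudo runc list -f json` on the node; on this crio node the runc state directory /run/runc does not exist, so the listing fails. A minimal reproduction sketch, assuming the newest-cni-250274 profile from this run is still up (the command is the one quoted in the stderr, run by hand over `minikube ssh`):

	# Run the same container listing minikube's paused check performs.
	# Expect exit status 1 with "open /run/runc: no such file or directory",
	# matching the stderr captured above.
	out/minikube-linux-arm64 -p newest-cni-250274 ssh -- sudo runc list -f json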
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-250274
helpers_test.go:243: (dbg) docker inspect newest-cni-250274:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4",
	        "Created": "2025-10-18T09:34:27.497200504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1478722,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:34:27.560745219Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/hosts",
	        "LogPath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4-json.log",
	        "Name": "/newest-cni-250274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-250274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-250274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4",
	                "LowerDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-250274",
	                "Source": "/var/lib/docker/volumes/newest-cni-250274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-250274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-250274",
	                "name.minikube.sigs.k8s.io": "newest-cni-250274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb479badd07b49ef619c28ba6d3620a58d64b0c884128a12ab9027c941e85175",
	            "SandboxKey": "/var/run/docker/netns/fb479badd07b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34906"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34907"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34910"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34908"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34909"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-250274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:1a:4f:ca:b9:6b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "804e7137416d690774484eb7cc39c343cbbb64651a610611c9ac627077f5c75f",
	                    "EndpointID": "2884e848d1d05a502131488c40194da663cd6b62eab5e970b12730d33b4d2357",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-250274",
	                        "3f010420231a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250274 -n newest-cni-250274
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250274 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-250274 logs -n 25: (1.111792649s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-136598 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │                     │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:30 UTC │ 18 Oct 25 09:32 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ delete  │ -p old-k8s-version-136598                                                                                                                                                                                                                     │ old-k8s-version-136598       │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:31 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p no-preload-886951 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p embed-certs-559379 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-559379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ image   │ no-preload-886951 image list --format=json                                                                                                                                                                                                    │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p disable-driver-mounts-877810                                                                                                                                                                                                               │ disable-driver-mounts-877810 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ image   │ embed-certs-559379 image list --format=json                                                                                                                                                                                                   │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ pause   │ -p embed-certs-559379 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-250274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:34:21
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:34:21.631504 1478223 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:34:21.631712 1478223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:21.631740 1478223 out.go:374] Setting ErrFile to fd 2...
	I1018 09:34:21.631762 1478223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:34:21.634318 1478223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:34:21.634845 1478223 out.go:368] Setting JSON to false
	I1018 09:34:21.635815 1478223 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40609,"bootTime":1760739453,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:34:21.635968 1478223 start.go:141] virtualization:  
	I1018 09:34:21.639665 1478223 out.go:179] * [newest-cni-250274] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:34:21.643372 1478223 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:34:21.643419 1478223 notify.go:220] Checking for updates...
	I1018 09:34:21.646318 1478223 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:34:21.649420 1478223 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:34:21.652300 1478223 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:34:21.655314 1478223 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:34:21.658191 1478223 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:34:21.661742 1478223 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:21.661864 1478223 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:34:21.704116 1478223 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:34:21.704249 1478223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:34:21.813284 1478223 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:34:21.796777633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:34:21.813436 1478223 docker.go:318] overlay module found
	I1018 09:34:21.816867 1478223 out.go:179] * Using the docker driver based on user configuration
	I1018 09:34:21.819600 1478223 start.go:305] selected driver: docker
	I1018 09:34:21.819630 1478223 start.go:925] validating driver "docker" against <nil>
	I1018 09:34:21.819644 1478223 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:34:21.820363 1478223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:34:21.905691 1478223 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:34:21.892088568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:34:21.905846 1478223 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 09:34:21.905875 1478223 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 09:34:21.906519 1478223 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:34:21.909474 1478223 out.go:179] * Using Docker driver with root privileges
	I1018 09:34:21.912251 1478223 cni.go:84] Creating CNI manager for ""
	I1018 09:34:21.912328 1478223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:34:21.912345 1478223 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:34:21.912449 1478223 start.go:349] cluster config:
	{Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:34:21.915491 1478223 out.go:179] * Starting "newest-cni-250274" primary control-plane node in "newest-cni-250274" cluster
	I1018 09:34:21.918342 1478223 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:34:21.920591 1478223 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:34:21.923365 1478223 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:34:21.923432 1478223 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:34:21.923444 1478223 cache.go:58] Caching tarball of preloaded images
	I1018 09:34:21.923442 1478223 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:34:21.923526 1478223 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:34:21.923536 1478223 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:34:21.923642 1478223 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/config.json ...
	I1018 09:34:21.923661 1478223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/config.json: {Name:mk31f7325abe6b18e263e0939712508f2c89b715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:21.944331 1478223 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:34:21.944355 1478223 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:34:21.944368 1478223 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:34:21.944390 1478223 start.go:360] acquireMachinesLock for newest-cni-250274: {Name:mk472d1fdef0a7773f022c5286349dcbff699ada Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:34:21.944496 1478223 start.go:364] duration metric: took 88.104µs to acquireMachinesLock for "newest-cni-250274"
	I1018 09:34:21.944520 1478223 start.go:93] Provisioning new machine with config: &{Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:34:21.944599 1478223 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:34:19.940269 1474687 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 6.356682145s
	I1018 09:34:20.509291 1474687 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.926631972s
	I1018 09:34:22.086406 1474687 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.501622582s
	I1018 09:34:22.109276 1474687 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:34:22.138707 1474687 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:34:22.159125 1474687 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:34:22.159370 1474687 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-593480 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:34:22.193648 1474687 kubeadm.go:318] [bootstrap-token] Using token: b16xxu.9slprpsvez7oeote
	I1018 09:34:22.202122 1474687 out.go:252]   - Configuring RBAC rules ...
	I1018 09:34:22.202262 1474687 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:34:22.208832 1474687 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:34:22.224143 1474687 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:34:22.253653 1474687 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:34:22.261062 1474687 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:34:22.264208 1474687 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:34:22.491137 1474687 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:34:22.947874 1474687 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:34:23.501968 1474687 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:34:23.503485 1474687 kubeadm.go:318] 
	I1018 09:34:23.503572 1474687 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:34:23.503582 1474687 kubeadm.go:318] 
	I1018 09:34:23.503663 1474687 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:34:23.503673 1474687 kubeadm.go:318] 
	I1018 09:34:23.503699 1474687 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:34:23.504123 1474687 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:34:23.504197 1474687 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:34:23.504208 1474687 kubeadm.go:318] 
	I1018 09:34:23.504265 1474687 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:34:23.504274 1474687 kubeadm.go:318] 
	I1018 09:34:23.504324 1474687 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:34:23.504333 1474687 kubeadm.go:318] 
	I1018 09:34:23.504387 1474687 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:34:23.504469 1474687 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:34:23.504547 1474687 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:34:23.504557 1474687 kubeadm.go:318] 
	I1018 09:34:23.504823 1474687 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:34:23.504911 1474687 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:34:23.504927 1474687 kubeadm.go:318] 
	I1018 09:34:23.505188 1474687 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token b16xxu.9slprpsvez7oeote \
	I1018 09:34:23.505305 1474687 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 \
	I1018 09:34:23.505497 1474687 kubeadm.go:318] 	--control-plane 
	I1018 09:34:23.505513 1474687 kubeadm.go:318] 
	I1018 09:34:23.505768 1474687 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:34:23.505783 1474687 kubeadm.go:318] 
	I1018 09:34:23.506046 1474687 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token b16xxu.9slprpsvez7oeote \
	I1018 09:34:23.508111 1474687 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 
	I1018 09:34:23.517260 1474687 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 09:34:23.517498 1474687 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 09:34:23.517612 1474687 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:34:23.517630 1474687 cni.go:84] Creating CNI manager for ""
	I1018 09:34:23.517637 1474687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:34:23.520776 1474687 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:34:21.948230 1478223 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:34:21.948471 1478223 start.go:159] libmachine.API.Create for "newest-cni-250274" (driver="docker")
	I1018 09:34:21.948512 1478223 client.go:168] LocalClient.Create starting
	I1018 09:34:21.948592 1478223 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem
	I1018 09:34:21.948622 1478223 main.go:141] libmachine: Decoding PEM data...
	I1018 09:34:21.948635 1478223 main.go:141] libmachine: Parsing certificate...
	I1018 09:34:21.948683 1478223 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem
	I1018 09:34:21.948699 1478223 main.go:141] libmachine: Decoding PEM data...
	I1018 09:34:21.948709 1478223 main.go:141] libmachine: Parsing certificate...
	I1018 09:34:21.949221 1478223 cli_runner.go:164] Run: docker network inspect newest-cni-250274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:34:21.967481 1478223 cli_runner.go:211] docker network inspect newest-cni-250274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:34:21.967579 1478223 network_create.go:284] running [docker network inspect newest-cni-250274] to gather additional debugging logs...
	I1018 09:34:21.967596 1478223 cli_runner.go:164] Run: docker network inspect newest-cni-250274
	W1018 09:34:21.984445 1478223 cli_runner.go:211] docker network inspect newest-cni-250274 returned with exit code 1
	I1018 09:34:21.984494 1478223 network_create.go:287] error running [docker network inspect newest-cni-250274]: docker network inspect newest-cni-250274: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-250274 not found
	I1018 09:34:21.984509 1478223 network_create.go:289] output of [docker network inspect newest-cni-250274]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-250274 not found
	
	** /stderr **
	I1018 09:34:21.984609 1478223 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:34:22.001141 1478223 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-521f8f572997 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:7e:e5:c0:67:29} reservation:<nil>}
	I1018 09:34:22.001861 1478223 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0b81e76c4e4f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ae:bf:e8:f1:22:c8} reservation:<nil>}
	I1018 09:34:22.002236 1478223 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-41e3e621447e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:62:fc:17:ff:cd:8c} reservation:<nil>}
	I1018 09:34:22.002728 1478223 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fa2f0}
	I1018 09:34:22.002752 1478223 network_create.go:124] attempt to create docker network newest-cni-250274 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1018 09:34:22.002823 1478223 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-250274 newest-cni-250274
	I1018 09:34:22.070007 1478223 network_create.go:108] docker network newest-cni-250274 192.168.76.0/24 created
	I1018 09:34:22.070043 1478223 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-250274" container
	I1018 09:34:22.070122 1478223 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:34:22.104269 1478223 cli_runner.go:164] Run: docker volume create newest-cni-250274 --label name.minikube.sigs.k8s.io=newest-cni-250274 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:34:22.136237 1478223 oci.go:103] Successfully created a docker volume newest-cni-250274
	I1018 09:34:22.136324 1478223 cli_runner.go:164] Run: docker run --rm --name newest-cni-250274-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250274 --entrypoint /usr/bin/test -v newest-cni-250274:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:34:22.807685 1478223 oci.go:107] Successfully prepared a docker volume newest-cni-250274
	I1018 09:34:22.807751 1478223 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:34:22.807774 1478223 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:34:22.807970 1478223 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-250274:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 09:34:23.523788 1474687 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:34:23.527971 1474687 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:34:23.527990 1474687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:34:23.544672 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:34:24.002763 1474687 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:34:24.002914 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:24.002998 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-593480 minikube.k8s.io/updated_at=2025_10_18T09_34_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=default-k8s-diff-port-593480 minikube.k8s.io/primary=true
	I1018 09:34:24.340279 1474687 ops.go:34] apiserver oom_adj: -16
	I1018 09:34:24.340381 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:24.840695 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:25.340693 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:25.840689 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:26.340510 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:26.841227 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:27.340536 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:27.841141 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:28.341276 1474687 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:28.652730 1474687 kubeadm.go:1113] duration metric: took 4.649873497s to wait for elevateKubeSystemPrivileges
	I1018 09:34:28.652766 1474687 kubeadm.go:402] duration metric: took 24.075972086s to StartCluster
	I1018 09:34:28.652782 1474687 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:28.652842 1474687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:34:28.653492 1474687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:28.653715 1474687 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:34:28.653810 1474687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:34:28.654061 1474687 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:28.654110 1474687 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:34:28.654203 1474687 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-593480"
	I1018 09:34:28.654217 1474687 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-593480"
	I1018 09:34:28.654242 1474687 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:34:28.654738 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:34:28.655303 1474687 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-593480"
	I1018 09:34:28.655335 1474687 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-593480"
	I1018 09:34:28.655599 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:34:28.658144 1474687 out.go:179] * Verifying Kubernetes components...
	I1018 09:34:28.666921 1474687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:34:28.697238 1474687 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-593480"
	I1018 09:34:28.697280 1474687 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:34:28.697707 1474687 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:34:28.700739 1474687 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:34:28.704007 1474687 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:34:28.704027 1474687 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:34:28.704083 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:28.729034 1474687 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:34:28.729056 1474687 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:34:28.729120 1474687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:34:28.764933 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:28.765365 1474687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34901 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:34:29.203001 1474687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:34:29.208687 1474687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:34:29.208874 1474687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:34:29.213370 1474687 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:34:29.923326 1474687 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1018 09:34:29.926430 1474687 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-593480" to be "Ready" ...
	I1018 09:34:29.967792 1474687 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:34:27.416287 1478223 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-250274:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.608273706s)
	I1018 09:34:27.416322 1478223 kic.go:203] duration metric: took 4.608544452s to extract preloaded images to volume ...
	W1018 09:34:27.416453 1478223 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:34:27.416575 1478223 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:34:27.481547 1478223 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-250274 --name newest-cni-250274 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-250274 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-250274 --network newest-cni-250274 --ip 192.168.76.2 --volume newest-cni-250274:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:34:27.797188 1478223 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Running}}
	I1018 09:34:27.822544 1478223 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:34:27.845811 1478223 cli_runner.go:164] Run: docker exec newest-cni-250274 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:34:27.923411 1478223 oci.go:144] the created container "newest-cni-250274" has a running status.
	I1018 09:34:27.923442 1478223 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa...
	I1018 09:34:28.122760 1478223 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:34:28.151391 1478223 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:34:28.174681 1478223 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:34:28.174699 1478223 kic_runner.go:114] Args: [docker exec --privileged newest-cni-250274 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:34:28.235169 1478223 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:34:28.260754 1478223 machine.go:93] provisionDockerMachine start ...
	I1018 09:34:28.260855 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:34:28.287976 1478223 main.go:141] libmachine: Using SSH client type: native
	I1018 09:34:28.288304 1478223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34906 <nil> <nil>}
	I1018 09:34:28.288314 1478223 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:34:28.288986 1478223 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 09:34:31.443458 1478223 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-250274
	
	I1018 09:34:31.443481 1478223 ubuntu.go:182] provisioning hostname "newest-cni-250274"
	I1018 09:34:31.443556 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:34:31.460013 1478223 main.go:141] libmachine: Using SSH client type: native
	I1018 09:34:31.460323 1478223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34906 <nil> <nil>}
	I1018 09:34:31.460339 1478223 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-250274 && echo "newest-cni-250274" | sudo tee /etc/hostname
	I1018 09:34:31.616957 1478223 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-250274
	
	I1018 09:34:31.617106 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:34:29.970700 1474687 addons.go:514] duration metric: took 1.316576551s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:34:30.428221 1474687 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-593480" context rescaled to 1 replicas
	W1018 09:34:31.930021 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:34:31.634836 1478223 main.go:141] libmachine: Using SSH client type: native
	I1018 09:34:31.635254 1478223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34906 <nil> <nil>}
	I1018 09:34:31.635275 1478223 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250274/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:34:31.784096 1478223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:34:31.784121 1478223 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:34:31.784146 1478223 ubuntu.go:190] setting up certificates
	I1018 09:34:31.784159 1478223 provision.go:84] configureAuth start
	I1018 09:34:31.784232 1478223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:34:31.801180 1478223 provision.go:143] copyHostCerts
	I1018 09:34:31.801248 1478223 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:34:31.801261 1478223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:34:31.801341 1478223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:34:31.801444 1478223 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:34:31.801455 1478223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:34:31.801482 1478223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:34:31.801553 1478223 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:34:31.801562 1478223 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:34:31.801587 1478223 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:34:31.801646 1478223 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250274 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-250274]
	I1018 09:34:32.063001 1478223 provision.go:177] copyRemoteCerts
	I1018 09:34:32.063096 1478223 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:34:32.063163 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:34:32.084693 1478223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34906 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:34:32.191969 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:34:32.211393 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:34:32.229968 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:34:32.247771 1478223 provision.go:87] duration metric: took 463.584758ms to configureAuth
	I1018 09:34:32.247799 1478223 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:34:32.248043 1478223 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:34:32.248149 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:34:32.264971 1478223 main.go:141] libmachine: Using SSH client type: native
	I1018 09:34:32.265293 1478223 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34906 <nil> <nil>}
	I1018 09:34:32.265312 1478223 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:34:32.526833 1478223 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:34:32.526858 1478223 machine.go:96] duration metric: took 4.266085298s to provisionDockerMachine
	I1018 09:34:32.526869 1478223 client.go:171] duration metric: took 10.578349465s to LocalClient.Create
	I1018 09:34:32.526887 1478223 start.go:167] duration metric: took 10.578417811s to libmachine.API.Create "newest-cni-250274"
	I1018 09:34:32.526899 1478223 start.go:293] postStartSetup for "newest-cni-250274" (driver="docker")
	I1018 09:34:32.526918 1478223 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:34:32.526989 1478223 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:34:32.527044 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:34:32.544663 1478223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34906 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:34:32.656418 1478223 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:34:32.659947 1478223 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:34:32.659972 1478223 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:34:32.659983 1478223 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:34:32.660038 1478223 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:34:32.660121 1478223 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:34:32.660224 1478223 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:34:32.667973 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:34:32.686109 1478223 start.go:296] duration metric: took 159.186432ms for postStartSetup
	I1018 09:34:32.686513 1478223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:34:32.702701 1478223 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/config.json ...
	I1018 09:34:32.702974 1478223 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:34:32.703021 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:34:32.721335 1478223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34906 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:34:32.820875 1478223 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:34:32.825347 1478223 start.go:128] duration metric: took 10.88073371s to createHost
	I1018 09:34:32.825370 1478223 start.go:83] releasing machines lock for "newest-cni-250274", held for 10.880866408s
	I1018 09:34:32.825443 1478223 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:34:32.842395 1478223 ssh_runner.go:195] Run: cat /version.json
	I1018 09:34:32.842424 1478223 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:34:32.842450 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:34:32.842513 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:34:32.866726 1478223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34906 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:34:32.869528 1478223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34906 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:34:32.967359 1478223 ssh_runner.go:195] Run: systemctl --version
	I1018 09:34:33.064669 1478223 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:34:33.101318 1478223 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:34:33.105512 1478223 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:34:33.105579 1478223 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:34:33.134723 1478223 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1018 09:34:33.134790 1478223 start.go:495] detecting cgroup driver to use...
	I1018 09:34:33.134837 1478223 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:34:33.134900 1478223 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:34:33.153453 1478223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:34:33.166658 1478223 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:34:33.166745 1478223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:34:33.185111 1478223 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:34:33.205224 1478223 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:34:33.334173 1478223 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:34:33.455754 1478223 docker.go:234] disabling docker service ...
	I1018 09:34:33.455830 1478223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:34:33.478684 1478223 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:34:33.492936 1478223 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:34:33.621135 1478223 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:34:33.745471 1478223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:34:33.758602 1478223 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:34:33.772508 1478223 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:34:33.772600 1478223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:33.781776 1478223 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:34:33.781881 1478223 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:33.790984 1478223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:33.799581 1478223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:33.808308 1478223 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:34:33.816194 1478223 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:33.825803 1478223 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:33.839979 1478223 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:34:33.849012 1478223 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:34:33.856571 1478223 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:34:33.864002 1478223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:34:33.993779 1478223 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:34:34.130262 1478223 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:34:34.130361 1478223 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:34:34.134265 1478223 start.go:563] Will wait 60s for crictl version
	I1018 09:34:34.134354 1478223 ssh_runner.go:195] Run: which crictl
	I1018 09:34:34.137923 1478223 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:34:34.163650 1478223 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:34:34.163783 1478223 ssh_runner.go:195] Run: crio --version
	I1018 09:34:34.192807 1478223 ssh_runner.go:195] Run: crio --version
	I1018 09:34:34.226870 1478223 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:34:34.230079 1478223 cli_runner.go:164] Run: docker network inspect newest-cni-250274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:34:34.247068 1478223 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:34:34.258679 1478223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:34:34.271721 1478223 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:34:34.274501 1478223 kubeadm.go:883] updating cluster {Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:34:34.274641 1478223 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:34:34.274720 1478223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:34:34.324037 1478223 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:34:34.324064 1478223 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:34:34.324123 1478223 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:34:34.354098 1478223 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:34:34.354119 1478223 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:34:34.354131 1478223 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:34:34.354224 1478223 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-250274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
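
The empty ExecStart= line in the generated drop-in is the standard systemd idiom: it clears the packaged unit's command so the following ExecStart= replaces it rather than being rejected as a duplicate. The merged unit can be inspected on the node (a sketch):

	# Print the unit plus all drop-ins, then the effective command line.
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart
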
	I1018 09:34:34.354305 1478223 ssh_runner.go:195] Run: crio config
	I1018 09:34:34.408509 1478223 cni.go:84] Creating CNI manager for ""
	I1018 09:34:34.408535 1478223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:34:34.408560 1478223 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:34:34.408592 1478223 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250274 NodeName:newest-cni-250274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:34:34.408784 1478223 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-250274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
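
The rendered file chains four documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Once it has been copied onto the node, recent kubeadm releases can sanity-check such a file before it is used (a sketch; the path matches where the log installs it):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
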
	
	I1018 09:34:34.408867 1478223 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:34:34.421814 1478223 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:34:34.421982 1478223 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:34:34.431677 1478223 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:34:34.446122 1478223 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:34:34.460231 1478223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1018 09:34:34.473534 1478223 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:34:34.477582 1478223 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
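
This is the same idempotent /etc/hosts rewrite used earlier for host.minikube.internal: strip any previous line for the name, append a fresh tab-separated mapping, and copy the temp file back with sudo. As a standalone sketch (name and IP taken from the log):

	NAME=control-plane.minikube.internal
	IP=192.168.76.2
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts
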
	I1018 09:34:34.487373 1478223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:34:34.612360 1478223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:34:34.627566 1478223 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274 for IP: 192.168.76.2
	I1018 09:34:34.627635 1478223 certs.go:195] generating shared ca certs ...
	I1018 09:34:34.627667 1478223 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:34.627884 1478223 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:34:34.627974 1478223 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:34:34.628002 1478223 certs.go:257] generating profile certs ...
	I1018 09:34:34.628090 1478223 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/client.key
	I1018 09:34:34.628136 1478223 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/client.crt with IP's: []
	I1018 09:34:35.875257 1478223 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/client.crt ...
	I1018 09:34:35.875289 1478223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/client.crt: {Name:mk66360739905533ad14dd894145247656663b39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:35.875479 1478223 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/client.key ...
	I1018 09:34:35.875492 1478223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/client.key: {Name:mk7a48b8cf175ea70b8d2575d4c61e2906818b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:35.875585 1478223 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key.08fa8726
	I1018 09:34:35.875604 1478223 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.crt.08fa8726 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 09:34:36.033745 1478223 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.crt.08fa8726 ...
	I1018 09:34:36.033776 1478223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.crt.08fa8726: {Name:mk203e1b7d25ec0088bb2180fb0784f5f991384e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:36.033995 1478223 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key.08fa8726 ...
	I1018 09:34:36.034014 1478223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key.08fa8726: {Name:mk630d1626edfc55590d6a88c5080122556614e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:36.034119 1478223 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.crt.08fa8726 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.crt
	I1018 09:34:36.034216 1478223 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key.08fa8726 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key
	I1018 09:34:36.034285 1478223 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key
	I1018 09:34:36.034309 1478223 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.crt with IP's: []
	I1018 09:34:36.761706 1478223 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.crt ...
	I1018 09:34:36.761737 1478223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.crt: {Name:mk95c2d788b2728e139644a7e0e7f04872cff8eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:36.761941 1478223 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key ...
	I1018 09:34:36.761953 1478223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key: {Name:mk47349a695eb60f52af9d36bcf70ec2b3856708 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:34:36.762131 1478223 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:34:36.762176 1478223 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:34:36.762189 1478223 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:34:36.762214 1478223 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:34:36.762246 1478223 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:34:36.762270 1478223 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:34:36.762319 1478223 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:34:36.762864 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:34:36.780423 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:34:36.799086 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:34:36.818820 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:34:36.836566 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:34:36.855622 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:34:36.874664 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:34:36.895775 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:34:36.915669 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:34:36.933839 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:34:36.951616 1478223 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:34:36.973224 1478223 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:34:36.986500 1478223 ssh_runner.go:195] Run: openssl version
	I1018 09:34:36.992931 1478223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:34:37.001120 1478223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:34:37.013669 1478223 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:34:37.013741 1478223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:34:37.058895 1478223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:34:37.067321 1478223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:34:37.075679 1478223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:34:37.079602 1478223 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:34:37.079683 1478223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:34:37.122338 1478223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:34:37.130548 1478223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:34:37.138509 1478223 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:34:37.142434 1478223 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:34:37.142497 1478223 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:34:37.183365 1478223 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
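
The link names here (51391683.0, 3ec20f2e.0, b5213941.0) are not arbitrary: OpenSSL locates CA certificates by subject hash, and each name is exactly the hash the preceding openssl x509 -hash call printed, plus a .0 suffix. Recreating one link by hand (a sketch; paths from the log):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
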
	I1018 09:34:37.191816 1478223 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:34:37.195137 1478223 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:34:37.195237 1478223 kubeadm.go:400] StartCluster: {Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:34:37.195328 1478223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:34:37.195382 1478223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:34:37.224470 1478223 cri.go:89] found id: ""
	I1018 09:34:37.224561 1478223 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:34:37.232294 1478223 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:34:37.239821 1478223 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:34:37.239929 1478223 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:34:37.247900 1478223 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:34:37.247970 1478223 kubeadm.go:157] found existing configuration files:
	
	I1018 09:34:37.248052 1478223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:34:37.255640 1478223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:34:37.255722 1478223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:34:37.265338 1478223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:34:37.274109 1478223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:34:37.274172 1478223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:34:37.281373 1478223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:34:37.289326 1478223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:34:37.289411 1478223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:34:37.297097 1478223 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:34:37.304463 1478223 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:34:37.304531 1478223 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
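
The four grep-then-rm pairs above reduce to one rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm regenerates it. Equivalently (a sketch; endpoint and file names from the log):

	ENDPOINT=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done
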
	I1018 09:34:37.315139 1478223 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:34:37.361216 1478223 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:34:37.361436 1478223 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:34:37.385456 1478223 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:34:37.385543 1478223 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:34:37.385611 1478223 kubeadm.go:318] OS: Linux
	I1018 09:34:37.385669 1478223 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:34:37.385731 1478223 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:34:37.385802 1478223 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:34:37.385862 1478223 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:34:37.385921 1478223 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:34:37.385974 1478223 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:34:37.386028 1478223 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:34:37.386089 1478223 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:34:37.386149 1478223 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:34:37.463623 1478223 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:34:37.463744 1478223 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:34:37.463883 1478223 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:34:37.472381 1478223 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1018 09:34:33.930204 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:34:35.930831 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:34:38.430443 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:34:37.478277 1478223 out.go:252]   - Generating certificates and keys ...
	I1018 09:34:37.478444 1478223 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:34:37.478566 1478223 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:34:38.684429 1478223 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:34:38.872833 1478223 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:34:39.428314 1478223 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:34:40.810585 1478223 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:34:41.041109 1478223 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:34:41.041584 1478223 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-250274] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:34:41.251913 1478223 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:34:41.252087 1478223 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-250274] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:34:41.383144 1478223 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	W1018 09:34:40.430829 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:34:42.930479 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:34:43.078399 1478223 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:34:43.188705 1478223 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:34:43.188957 1478223 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:34:43.690152 1478223 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:34:43.943520 1478223 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:34:45.175882 1478223 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:34:46.600330 1478223 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:34:46.679104 1478223 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:34:46.679918 1478223 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:34:46.682634 1478223 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1018 09:34:45.431307 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:34:47.930464 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:34:46.685878 1478223 out.go:252]   - Booting up control plane ...
	I1018 09:34:46.685988 1478223 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:34:46.686070 1478223 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:34:46.686693 1478223 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:34:46.707778 1478223 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:34:46.707919 1478223 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:34:46.715283 1478223 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:34:46.715679 1478223 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:34:46.715999 1478223 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:34:46.852401 1478223 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:34:46.852526 1478223 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:34:47.862019 1478223 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.011477113s
	I1018 09:34:47.865792 1478223 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:34:47.865890 1478223 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 09:34:47.865983 1478223 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:34:47.866065 1478223 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:34:51.491612 1478223 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.625214543s
	W1018 09:34:50.430254 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:34:52.930069 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:34:54.447000 1478223 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.581152419s
	I1018 09:34:54.867967 1478223 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001996763s
	I1018 09:34:54.891701 1478223 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:34:54.910384 1478223 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:34:54.927435 1478223 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:34:54.927645 1478223 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-250274 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:34:54.943955 1478223 kubeadm.go:318] [bootstrap-token] Using token: cfego7.9b0hnllziac6hp5h
	I1018 09:34:54.947067 1478223 out.go:252]   - Configuring RBAC rules ...
	I1018 09:34:54.947201 1478223 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:34:54.951957 1478223 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:34:54.961154 1478223 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:34:54.969013 1478223 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:34:54.973670 1478223 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:34:54.977767 1478223 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:34:55.275466 1478223 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:34:55.705614 1478223 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:34:56.275369 1478223 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:34:56.276554 1478223 kubeadm.go:318] 
	I1018 09:34:56.276650 1478223 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:34:56.276662 1478223 kubeadm.go:318] 
	I1018 09:34:56.276745 1478223 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:34:56.276754 1478223 kubeadm.go:318] 
	I1018 09:34:56.276780 1478223 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:34:56.276845 1478223 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:34:56.276903 1478223 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:34:56.276912 1478223 kubeadm.go:318] 
	I1018 09:34:56.276974 1478223 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:34:56.276983 1478223 kubeadm.go:318] 
	I1018 09:34:56.277032 1478223 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:34:56.277040 1478223 kubeadm.go:318] 
	I1018 09:34:56.277094 1478223 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:34:56.277178 1478223 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:34:56.277253 1478223 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:34:56.277261 1478223 kubeadm.go:318] 
	I1018 09:34:56.277350 1478223 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:34:56.277433 1478223 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:34:56.277441 1478223 kubeadm.go:318] 
	I1018 09:34:56.277540 1478223 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token cfego7.9b0hnllziac6hp5h \
	I1018 09:34:56.277655 1478223 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 \
	I1018 09:34:56.277680 1478223 kubeadm.go:318] 	--control-plane 
	I1018 09:34:56.277687 1478223 kubeadm.go:318] 
	I1018 09:34:56.277776 1478223 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:34:56.277784 1478223 kubeadm.go:318] 
	I1018 09:34:56.277870 1478223 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token cfego7.9b0hnllziac6hp5h \
	I1018 09:34:56.277983 1478223 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 
	I1018 09:34:56.282424 1478223 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 09:34:56.282661 1478223 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 09:34:56.282782 1478223 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
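
Of the three preflight warnings, only the last is actionable on a long-lived node; minikube drives the kubelet itself, but the remedy kubeadm suggests is a single command:

	sudo systemctl enable kubelet.service
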
	I1018 09:34:56.282802 1478223 cni.go:84] Creating CNI manager for ""
	I1018 09:34:56.282811 1478223 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:34:56.285936 1478223 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:34:56.288761 1478223 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:34:56.292864 1478223 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:34:56.292883 1478223 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:34:56.306979 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:34:56.611359 1478223 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:34:56.611488 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:56.611560 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-250274 minikube.k8s.io/updated_at=2025_10_18T09_34_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=newest-cni-250274 minikube.k8s.io/primary=true
	W1018 09:34:55.429935 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:34:57.929868 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:34:56.773073 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:56.773129 1478223 ops.go:34] apiserver oom_adj: -16
	I1018 09:34:57.273835 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:57.774056 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:58.273182 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:58.773190 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:59.273666 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:34:59.773759 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:35:00.273698 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:35:00.773586 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:35:01.273641 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:35:01.773487 1478223 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:35:01.889986 1478223 kubeadm.go:1113] duration metric: took 5.278542206s to wait for elevateKubeSystemPrivileges
	I1018 09:35:01.890029 1478223 kubeadm.go:402] duration metric: took 24.694795687s to StartCluster
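
The burst of kubectl get sa default calls above is a readiness poll: minikube retries on a short interval until the default ServiceAccount exists, which is what the 5.28s elevateKubeSystemPrivileges metric measures. A minimal equivalent loop (binary and kubeconfig paths from the log; the 0.5s interval is an assumption):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
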
	I1018 09:35:01.890050 1478223 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:01.890124 1478223 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:01.891150 1478223 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:01.891391 1478223 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:35:01.891500 1478223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:35:01.891772 1478223 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:01.891806 1478223 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:35:01.891928 1478223 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-250274"
	I1018 09:35:01.891949 1478223 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-250274"
	I1018 09:35:01.891975 1478223 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:01.892264 1478223 addons.go:69] Setting default-storageclass=true in profile "newest-cni-250274"
	I1018 09:35:01.892287 1478223 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-250274"
	I1018 09:35:01.892487 1478223 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:01.892634 1478223 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:01.896535 1478223 out.go:179] * Verifying Kubernetes components...
	I1018 09:35:01.900746 1478223 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:01.933325 1478223 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:35:01.937871 1478223 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:01.937903 1478223 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:35:01.937970 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:01.939911 1478223 addons.go:238] Setting addon default-storageclass=true in "newest-cni-250274"
	I1018 09:35:01.939955 1478223 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:01.940394 1478223 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:01.985519 1478223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34906 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:01.990724 1478223 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:01.990746 1478223 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:35:01.990817 1478223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:02.025617 1478223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34906 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:02.226863 1478223 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
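
The pipeline above edits the CoreDNS Corefile in flight: it inserts a hosts block resolving host.minikube.internal to the gateway ahead of the forward plugin, adds a log directive before errors, and replaces the ConfigMap. The injected stanza can be checked afterwards (a sketch, run with a working kubeconfig; the expected text is inferred from the sed expressions, including their indentation):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
	# Expected:
	#         hosts {
	#            192.168.76.1 host.minikube.internal
	#            fallthrough
	#         }
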
	I1018 09:35:02.271468 1478223 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:02.379079 1478223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:02.419023 1478223 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:02.802256 1478223 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:35:02.802319 1478223 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:35:02.802535 1478223 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 09:35:03.152696 1478223 api_server.go:72] duration metric: took 1.261275998s to wait for apiserver process to appear ...
	I1018 09:35:03.152718 1478223 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:35:03.152736 1478223 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:03.162742 1478223 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:35:03.164160 1478223 api_server.go:141] control plane version: v1.34.1
	I1018 09:35:03.164201 1478223 api_server.go:131] duration metric: took 11.476045ms to wait for apiserver health ...
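
The healthz wait is a plain HTTPS GET; Kubernetes exposes /healthz (along with /livez and /readyz) to unauthenticated clients under the default RBAC rules, so from the node the probe reduces to (a sketch; the CA path is the standard minikube certs location seen earlier in this log):

	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.76.2:8443/healthz
	# -> ok
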
	I1018 09:35:03.164211 1478223 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:35:03.179334 1478223 system_pods.go:59] 8 kube-system pods found
	I1018 09:35:03.179384 1478223 system_pods.go:61] "coredns-66bc5c9577-g7kfg" [38b7f130-b2b9-48a2-93bd-ad4c13e911cb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:35:03.179394 1478223 system_pods.go:61] "etcd-newest-cni-250274" [b856dfe7-8c88-4774-9e86-2b971cf7e5f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:35:03.179405 1478223 system_pods.go:61] "kindnet-p4pv8" [7a400bc4-76f3-4503-b82a-52b0cabbb2a3] Running
	I1018 09:35:03.179410 1478223 system_pods.go:61] "kube-apiserver-newest-cni-250274" [2b020b61-a478-4fd1-9bd8-ae42ae1ab60e] Running
	I1018 09:35:03.179429 1478223 system_pods.go:61] "kube-controller-manager-newest-cni-250274" [54fb4f01-f3c6-4b86-a2e4-48e6656c751e] Running
	I1018 09:35:03.179437 1478223 system_pods.go:61] "kube-proxy-w56ln" [84d08ca5-9902-4380-bd4e-2aac486b22e6] Running
	I1018 09:35:03.179441 1478223 system_pods.go:61] "kube-scheduler-newest-cni-250274" [51b1c6fd-638b-47fa-9f59-e24e2ec914f6] Running
	I1018 09:35:03.179447 1478223 system_pods.go:61] "storage-provisioner" [8a360733-56ab-4bc7-ae00-5f7b4d528d8d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:35:03.179455 1478223 system_pods.go:74] duration metric: took 15.238418ms to wait for pod list to return data ...
	I1018 09:35:03.179464 1478223 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:35:03.181794 1478223 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:35:03.183901 1478223 default_sa.go:45] found service account: "default"
	I1018 09:35:03.183976 1478223 default_sa.go:55] duration metric: took 4.487637ms for default service account to be created ...
	I1018 09:35:03.184042 1478223 kubeadm.go:586] duration metric: took 1.292585804s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:35:03.184087 1478223 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:35:03.185385 1478223 addons.go:514] duration metric: took 1.293571771s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:35:03.190259 1478223 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:35:03.190293 1478223 node_conditions.go:123] node cpu capacity is 2
	I1018 09:35:03.190306 1478223 node_conditions.go:105] duration metric: took 6.200041ms to run NodePressure ...
	I1018 09:35:03.190319 1478223 start.go:241] waiting for startup goroutines ...
	I1018 09:35:03.306847 1478223 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-250274" context rescaled to 1 replicas
	I1018 09:35:03.306919 1478223 start.go:246] waiting for cluster config update ...
	I1018 09:35:03.306939 1478223 start.go:255] writing updated cluster config ...
	I1018 09:35:03.307251 1478223 ssh_runner.go:195] Run: rm -f paused
	I1018 09:35:03.377551 1478223 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:35:03.381245 1478223 out.go:179] * Done! kubectl is now configured to use "newest-cni-250274" cluster and "default" namespace by default
	W1018 09:34:59.930411 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:35:01.933231 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.280512512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.290419051Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c8a672cf-ed36-4308-9e25-2a9d0eb90e14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.291525514Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-w56ln/POD" id=a58387d9-5c7a-4919-b6f2-35fff151e6a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.291703478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.302799727Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a58387d9-5c7a-4919-b6f2-35fff151e6a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.306716426Z" level=info msg="Ran pod sandbox d34732fe855ca58b09e6b7d016e61073aaf5d7d10d65b6cd19763b1d0672d3b5 with infra container: kube-system/kindnet-p4pv8/POD" id=c8a672cf-ed36-4308-9e25-2a9d0eb90e14 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.310273915Z" level=info msg="Ran pod sandbox 31206831cef5259fa1e88fbe3b142b2455e33a937e534cdf2ac20183eb491f9c with infra container: kube-system/kube-proxy-w56ln/POD" id=a58387d9-5c7a-4919-b6f2-35fff151e6a3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.319621699Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=7c8c6ccf-2972-4a35-87e1-ef5e7672e1b5 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.319952168Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0c07f4c2-60e1-4b77-a617-b6d70fdcb4b5 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.324217641Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6b46eba5-0c6d-4a1e-8394-7dbc5fddf8d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.324484891Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=63641492-0461-4293-bc97-9199d741ca52 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.330665535Z" level=info msg="Creating container: kube-system/kindnet-p4pv8/kindnet-cni" id=653f1fe3-910c-47c0-85a8-b22d0016290a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.330941318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.336491506Z" level=info msg="Creating container: kube-system/kube-proxy-w56ln/kube-proxy" id=70df2113-fe24-433c-a653-392ac21bc475 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.338063015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.339466389Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.342918059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.347871849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.353541607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.385049299Z" level=info msg="Created container 8292e1538c20189ba38754acb8bfe3916bd9c98688febf5864efd566bc2c138d: kube-system/kube-proxy-w56ln/kube-proxy" id=70df2113-fe24-433c-a653-392ac21bc475 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.387342954Z" level=info msg="Starting container: 8292e1538c20189ba38754acb8bfe3916bd9c98688febf5864efd566bc2c138d" id=54e4ad7f-04f3-4ae8-b287-0db08036f0f1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.390389458Z" level=info msg="Started container" PID=1510 containerID=8292e1538c20189ba38754acb8bfe3916bd9c98688febf5864efd566bc2c138d description=kube-system/kube-proxy-w56ln/kube-proxy id=54e4ad7f-04f3-4ae8-b287-0db08036f0f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31206831cef5259fa1e88fbe3b142b2455e33a937e534cdf2ac20183eb491f9c
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.401306495Z" level=info msg="Created container 1163c3e33d8433583f1f8cc3ed65041f6770c314b3f774fe6e60ba8ff1007084: kube-system/kindnet-p4pv8/kindnet-cni" id=653f1fe3-910c-47c0-85a8-b22d0016290a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.406398095Z" level=info msg="Starting container: 1163c3e33d8433583f1f8cc3ed65041f6770c314b3f774fe6e60ba8ff1007084" id=0329fd6e-c924-4981-bbd8-cb976a15e8ca name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:35:02 newest-cni-250274 crio[835]: time="2025-10-18T09:35:02.409053241Z" level=info msg="Started container" PID=1509 containerID=1163c3e33d8433583f1f8cc3ed65041f6770c314b3f774fe6e60ba8ff1007084 description=kube-system/kindnet-p4pv8/kindnet-cni id=0329fd6e-c924-4981-bbd8-cb976a15e8ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=d34732fe855ca58b09e6b7d016e61073aaf5d7d10d65b6cd19763b1d0672d3b5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8292e1538c201       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   31206831cef52       kube-proxy-w56ln                            kube-system
	1163c3e33d843       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   d34732fe855ca       kindnet-p4pv8                               kube-system
	9cefd72d4e865       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   0                   24ed650eb3179       kube-controller-manager-newest-cni-250274   kube-system
	6fef342edb63f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            0                   98d2f70eac1f1       kube-apiserver-newest-cni-250274            kube-system
	09e102e9dc61c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            0                   ec1e832abbab0       kube-scheduler-newest-cni-250274            kube-system
	44247f1235ddf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      0                   54931a69c1db2       etcd-newest-cni-250274                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-250274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-250274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=newest-cni-250274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_34_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:34:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-250274
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:34:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:34:55 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:34:55 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:34:55 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 09:34:55 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-250274
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c687e818-f7ce-4926-9d94-118c26727656
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-250274                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-p4pv8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-250274             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-250274    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-w56ln                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-250274             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 1s    kube-proxy       
	  Normal   Starting                 9s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s    kubelet          Node newest-cni-250274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s    kubelet          Node newest-cni-250274 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s    kubelet          Node newest-cni-250274 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s    node-controller  Node newest-cni-250274 event: Registered Node newest-cni-250274 in Controller
	
	
	==> dmesg <==
	[Oct18 09:13] overlayfs: idmapped layers are currently not supported
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	[Oct18 09:34] overlayfs: idmapped layers are currently not supported
	[ +34.458375] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [44247f1235ddf98b993b7d76973a62f45a0eaeea245e70de45de4d32a8afb782] <==
	{"level":"warn","ts":"2025-10-18T09:34:51.385397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.424959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.450356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.471786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.504356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.522118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.537163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.557143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.572396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.597899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.608195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.627061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.644552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.661211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.685448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.699069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.737359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.744684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.760445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.777377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.796838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.827142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.849424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.861213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:51.974896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59684","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:04 up 11:17,  0 user,  load average: 2.84, 3.16, 2.70
	Linux newest-cni-250274 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1163c3e33d8433583f1f8cc3ed65041f6770c314b3f774fe6e60ba8ff1007084] <==
	I1018 09:35:02.521464       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:35:02.521726       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:35:02.521845       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:35:02.521856       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:35:02.521868       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:35:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:35:02.809080       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:35:02.809261       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:35:02.809303       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:35:02.809709       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [6fef342edb63f6641da8337a40321deacc6bde303fd97e200fe79122ac67b13c] <==
	E1018 09:34:53.026160       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1018 09:34:53.070428       1 controller.go:667] quota admission added evaluator for: namespaces
	E1018 09:34:53.095731       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1018 09:34:53.114756       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:34:53.132109       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:34:53.171625       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:34:53.173923       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:34:53.228389       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:34:53.705141       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:34:53.713387       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:34:53.713505       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:34:54.726070       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:34:54.780124       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:34:54.900009       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:34:54.915417       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 09:34:54.917155       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:34:54.931604       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:34:55.679740       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:34:55.686845       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:34:55.704170       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:34:55.717630       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:35:01.490906       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:35:01.496189       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:35:01.541301       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:35:01.638435       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [9cefd72d4e865e32eba03fae6529f474456a8ae4585dff4fe11aa04893b460d3] <==
	I1018 09:35:00.776787       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:35:00.780739       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:35:00.780791       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:35:00.781891       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:35:00.782130       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:35:00.782283       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:35:00.782349       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:35:00.782447       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:35:00.782460       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:35:00.782467       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:35:00.782548       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:35:00.782982       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:35:00.783062       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-250274"
	I1018 09:35:00.783099       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:35:00.783693       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:35:00.784217       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:35:00.784324       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:35:00.789943       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:35:00.790044       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:35:00.790068       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:35:00.790086       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:35:00.790100       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:35:00.790395       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:35:00.806504       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-250274" podCIDRs=["10.42.0.0/24"]
	I1018 09:35:00.811718       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [8292e1538c20189ba38754acb8bfe3916bd9c98688febf5864efd566bc2c138d] <==
	I1018 09:35:02.469346       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:35:02.592840       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:35:02.706091       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:35:02.706127       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:35:02.706201       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:35:02.858952       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:35:02.859222       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:35:02.964087       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:35:02.964457       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:35:02.964662       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:35:02.966065       1 config.go:200] "Starting service config controller"
	I1018 09:35:02.966129       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:35:02.966186       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:35:02.966231       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:35:02.966269       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:35:02.966329       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:35:02.970971       1 config.go:309] "Starting node config controller"
	I1018 09:35:02.971050       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:35:02.971099       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:35:03.066530       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:35:03.066568       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:35:03.066617       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [09e102e9dc61c4689ffa7dcfd20bb752c54afc93a1a0f4ef9a6d07aa71e846b8] <==
	I1018 09:34:52.167854       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:34:54.410150       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:34:54.410255       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:34:54.410292       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:34:54.410323       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:34:54.436674       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:34:54.436801       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:34:54.439189       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:34:54.439552       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:34:54.447010       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:34:54.439574       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 09:34:54.462809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 09:34:55.947370       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: I1018 09:34:56.649202    1323 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: I1018 09:34:56.733609    1323 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-250274"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: I1018 09:34:56.734047    1323 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-250274"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: I1018 09:34:56.736688    1323 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-250274"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: E1018 09:34:56.766846    1323 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-250274\" already exists" pod="kube-system/etcd-newest-cni-250274"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: E1018 09:34:56.767325    1323 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-250274\" already exists" pod="kube-system/kube-apiserver-newest-cni-250274"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: E1018 09:34:56.767720    1323 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-250274\" already exists" pod="kube-system/kube-controller-manager-newest-cni-250274"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: I1018 09:34:56.801106    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-250274" podStartSLOduration=1.80108748 podStartE2EDuration="1.80108748s" podCreationTimestamp="2025-10-18 09:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:34:56.782452356 +0000 UTC m=+1.248455714" watchObservedRunningTime="2025-10-18 09:34:56.80108748 +0000 UTC m=+1.267090821"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: I1018 09:34:56.801282    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-250274" podStartSLOduration=1.801276275 podStartE2EDuration="1.801276275s" podCreationTimestamp="2025-10-18 09:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:34:56.800873176 +0000 UTC m=+1.266876542" watchObservedRunningTime="2025-10-18 09:34:56.801276275 +0000 UTC m=+1.267279616"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: I1018 09:34:56.830375    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-250274" podStartSLOduration=1.830357655 podStartE2EDuration="1.830357655s" podCreationTimestamp="2025-10-18 09:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:34:56.815655624 +0000 UTC m=+1.281658989" watchObservedRunningTime="2025-10-18 09:34:56.830357655 +0000 UTC m=+1.296360996"
	Oct 18 09:34:56 newest-cni-250274 kubelet[1323]: I1018 09:34:56.844898    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-250274" podStartSLOduration=1.844882411 podStartE2EDuration="1.844882411s" podCreationTimestamp="2025-10-18 09:34:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:34:56.830645417 +0000 UTC m=+1.296648799" watchObservedRunningTime="2025-10-18 09:34:56.844882411 +0000 UTC m=+1.310885752"
	Oct 18 09:35:00 newest-cni-250274 kubelet[1323]: I1018 09:35:00.821492    1323 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 09:35:00 newest-cni-250274 kubelet[1323]: I1018 09:35:00.822595    1323 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 09:35:01 newest-cni-250274 kubelet[1323]: I1018 09:35:01.800809    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84d08ca5-9902-4380-bd4e-2aac486b22e6-lib-modules\") pod \"kube-proxy-w56ln\" (UID: \"84d08ca5-9902-4380-bd4e-2aac486b22e6\") " pod="kube-system/kube-proxy-w56ln"
	Oct 18 09:35:01 newest-cni-250274 kubelet[1323]: I1018 09:35:01.800855    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48x5f\" (UniqueName: \"kubernetes.io/projected/84d08ca5-9902-4380-bd4e-2aac486b22e6-kube-api-access-48x5f\") pod \"kube-proxy-w56ln\" (UID: \"84d08ca5-9902-4380-bd4e-2aac486b22e6\") " pod="kube-system/kube-proxy-w56ln"
	Oct 18 09:35:01 newest-cni-250274 kubelet[1323]: I1018 09:35:01.800892    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-cni-cfg\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:01 newest-cni-250274 kubelet[1323]: I1018 09:35:01.800912    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-xtables-lock\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:01 newest-cni-250274 kubelet[1323]: I1018 09:35:01.800938    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-lib-modules\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:01 newest-cni-250274 kubelet[1323]: I1018 09:35:01.800983    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54v5q\" (UniqueName: \"kubernetes.io/projected/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-kube-api-access-54v5q\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:01 newest-cni-250274 kubelet[1323]: I1018 09:35:01.801012    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/84d08ca5-9902-4380-bd4e-2aac486b22e6-kube-proxy\") pod \"kube-proxy-w56ln\" (UID: \"84d08ca5-9902-4380-bd4e-2aac486b22e6\") " pod="kube-system/kube-proxy-w56ln"
	Oct 18 09:35:01 newest-cni-250274 kubelet[1323]: I1018 09:35:01.801041    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84d08ca5-9902-4380-bd4e-2aac486b22e6-xtables-lock\") pod \"kube-proxy-w56ln\" (UID: \"84d08ca5-9902-4380-bd4e-2aac486b22e6\") " pod="kube-system/kube-proxy-w56ln"
	Oct 18 09:35:01 newest-cni-250274 kubelet[1323]: I1018 09:35:01.998613    1323 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 09:35:02 newest-cni-250274 kubelet[1323]: W1018 09:35:02.306190    1323 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/crio-d34732fe855ca58b09e6b7d016e61073aaf5d7d10d65b6cd19763b1d0672d3b5 WatchSource:0}: Error finding container d34732fe855ca58b09e6b7d016e61073aaf5d7d10d65b6cd19763b1d0672d3b5: Status 404 returned error can't find the container with id d34732fe855ca58b09e6b7d016e61073aaf5d7d10d65b6cd19763b1d0672d3b5
	Oct 18 09:35:02 newest-cni-250274 kubelet[1323]: I1018 09:35:02.777351    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w56ln" podStartSLOduration=1.777299165 podStartE2EDuration="1.777299165s" podCreationTimestamp="2025-10-18 09:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:35:02.777010353 +0000 UTC m=+7.243013702" watchObservedRunningTime="2025-10-18 09:35:02.777299165 +0000 UTC m=+7.243302506"
	Oct 18 09:35:03 newest-cni-250274 kubelet[1323]: I1018 09:35:03.814117    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-p4pv8" podStartSLOduration=2.8140966450000002 podStartE2EDuration="2.814096645s" podCreationTimestamp="2025-10-18 09:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:35:02.820080521 +0000 UTC m=+7.286083870" watchObservedRunningTime="2025-10-18 09:35:03.814096645 +0000 UTC m=+8.280099994"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250274 -n newest-cni-250274
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-250274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-g7kfg storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-250274 describe pod coredns-66bc5c9577-g7kfg storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-250274 describe pod coredns-66bc5c9577-g7kfg storage-provisioner: exit status 1 (90.593782ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-g7kfg" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-250274 describe pod coredns-66bc5c9577-g7kfg storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-593480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-593480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (353.059172ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
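Note on the failure above: the MK_ADDON_ENABLE_PAUSED exit means the addon command never reached the cluster; it aborted in minikube's pre-flight check for paused containers, which (per the stderr) shells out to `sudo runc list -f json` and fails with `open /run/runc: no such file or directory` because runc's default state directory is absent on this node at that moment. Below is a minimal Go sketch of that kind of check, written for this report and not taken from minikube's source; the `runcContainer` type and `pausedContainers` helper are illustrative names only.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer picks out the two fields of interest from `runc list -f json`
	// output; runc's state JSON uses lowercase "id" and "status" keys.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// pausedContainers runs the same command the stderr above shows failing.
	// On this node it errors out before any JSON is produced, because runc's
	// default state directory, /run/runc, does not exist.
	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		if ids, err := pausedContainers(); err != nil {
			fmt.Println("check paused:", err) // mirrors the "check paused: list paused" chain above
		} else {
			fmt.Println("paused containers:", ids)
		}
	}
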
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-593480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-593480 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-593480 describe deploy/metrics-server -n kube-system: exit status 1 (103.061866ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-593480 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-593480
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-593480:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679",
	        "Created": "2025-10-18T09:33:54.439784864Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1475090,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:33:54.504730542Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/hostname",
	        "HostsPath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/hosts",
	        "LogPath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679-json.log",
	        "Name": "/default-k8s-diff-port-593480",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-593480:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-593480",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679",
	                "LowerDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-593480",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-593480/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-593480",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-593480",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-593480",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e7465319aace3cbcff73ff24c7dae98cd7cff515c71a88494759481bee61a346",
	            "SandboxKey": "/var/run/docker/netns/e7465319aace",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34901"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34902"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34905"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34903"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34904"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-593480": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:13:ca:2f:01:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1dd19821ca12a42bf31368ca6b87d68bd1622c2ff94469b47f038636ec26347a",
	                    "EndpointID": "08f2b430a97b37438aa0e76bd4c1fc0539268409ee5ac775c6eb2623ff08e5eb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-593480",
	                        "bfa509b1b053"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
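The `NetworkSettings.Ports` map in the inspect output above is where the host endpoint for this profile's non-default API server port lives: `8444/tcp` is published on `127.0.0.1:34904`. A small Go sketch, written for this report (not a minikube helper), that recovers that mapping from `docker inspect`; the `inspectResult` type is an illustrative partial mirror of the JSON shown above.

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// inspectResult mirrors only the NetworkSettings.Ports shape from the
	// `docker inspect` JSON captured above (illustrative, not exhaustive).
	type inspectResult struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// docker inspect prints a JSON array with one element per inspected name.
		out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-593480").Output()
		if err != nil {
			log.Fatalf("docker inspect: %v", err)
		}
		var res []inspectResult
		if err := json.Unmarshal(out, &res); err != nil {
			log.Fatalf("decode inspect output: %v", err)
		}
		if len(res) == 0 {
			log.Fatal("no container in inspect output")
		}
		// 8444/tcp is the profile's non-default API server port; in the
		// capture above it is bound to 127.0.0.1:34904 on the host.
		for _, b := range res[0].NetworkSettings.Ports["8444/tcp"] {
			fmt.Printf("apiserver endpoint: https://%s:%s\n", b.HostIp, b.HostPort)
		}
	}
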
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-593480 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-593480 logs -n 25: (1.800579795s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:31 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable metrics-server -p no-preload-886951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p no-preload-886951 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p embed-certs-559379 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-559379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ image   │ no-preload-886951 image list --format=json                                                                                                                                                                                                    │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p disable-driver-mounts-877810                                                                                                                                                                                                               │ disable-driver-mounts-877810 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:35 UTC │
	│ image   │ embed-certs-559379 image list --format=json                                                                                                                                                                                                   │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ pause   │ -p embed-certs-559379 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-250274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ stop    │ -p newest-cni-250274 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-250274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-593480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
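	(The Audit table above is rendered by "minikube logs" from minikube's audit log. As an aside, the raw entries can be queried directly; the audit.json path under this run's MINIKUBE_HOME and the per-entry JSON layout are assumptions here, not something this report shows:)
	  # Hypothetical: list command/profile pairs from the raw audit log.
	  jq -r '[.data.command, .data.profile] | @tsv' \
	    /home/jenkins/minikube-integration/21767-1274243/.minikube/logs/audit.json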
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:35:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:35:07.451459 1481740 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:35:07.451656 1481740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:07.451667 1481740 out.go:374] Setting ErrFile to fd 2...
	I1018 09:35:07.451672 1481740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:07.452023 1481740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:35:07.452457 1481740 out.go:368] Setting JSON to false
	I1018 09:35:07.453521 1481740 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40655,"bootTime":1760739453,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:35:07.453595 1481740 start.go:141] virtualization:  
	I1018 09:35:07.456720 1481740 out.go:179] * [newest-cni-250274] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:35:07.460746 1481740 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:35:07.460877 1481740 notify.go:220] Checking for updates...
	I1018 09:35:07.467201 1481740 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:35:07.470266 1481740 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:07.473319 1481740 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:35:07.476459 1481740 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:35:07.479356 1481740 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:35:07.482864 1481740 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:07.483467 1481740 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:35:07.517721 1481740 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:35:07.517837 1481740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:07.573757 1481740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:35:07.564612032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:07.573865 1481740 docker.go:318] overlay module found
	I1018 09:35:07.579110 1481740 out.go:179] * Using the docker driver based on existing profile
	I1018 09:35:07.581870 1481740 start.go:305] selected driver: docker
	I1018 09:35:07.581888 1481740 start.go:925] validating driver "docker" against &{Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:07.581991 1481740 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:35:07.582704 1481740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:07.639922 1481740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:35:07.63089103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:07.640270 1481740 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:35:07.640306 1481740 cni.go:84] Creating CNI manager for ""
	I1018 09:35:07.640366 1481740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:07.640410 1481740 start.go:349] cluster config:
	{Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:07.643508 1481740 out.go:179] * Starting "newest-cni-250274" primary control-plane node in "newest-cni-250274" cluster
	I1018 09:35:07.646302 1481740 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:35:07.649050 1481740 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:35:07.651891 1481740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:07.651979 1481740 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:35:07.651995 1481740 cache.go:58] Caching tarball of preloaded images
	I1018 09:35:07.652061 1481740 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:35:07.652302 1481740 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:35:07.652313 1481740 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:35:07.652431 1481740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/config.json ...
	I1018 09:35:07.672327 1481740 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:35:07.672353 1481740 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:35:07.672373 1481740 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:35:07.672397 1481740 start.go:360] acquireMachinesLock for newest-cni-250274: {Name:mk472d1fdef0a7773f022c5286349dcbff699ada Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:35:07.672472 1481740 start.go:364] duration metric: took 48.179µs to acquireMachinesLock for "newest-cni-250274"
	I1018 09:35:07.672495 1481740 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:35:07.672506 1481740 fix.go:54] fixHost starting: 
	I1018 09:35:07.672769 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:07.689184 1481740 fix.go:112] recreateIfNeeded on newest-cni-250274: state=Stopped err=<nil>
	W1018 09:35:07.689214 1481740 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:35:04.429700 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:35:06.430055 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:35:07.692361 1481740 out.go:252] * Restarting existing docker container for "newest-cni-250274" ...
	I1018 09:35:07.692442 1481740 cli_runner.go:164] Run: docker start newest-cni-250274
	I1018 09:35:07.935274 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:07.957316 1481740 kic.go:430] container "newest-cni-250274" state is running.
	I1018 09:35:07.957748 1481740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:35:07.979159 1481740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/config.json ...
	I1018 09:35:07.979391 1481740 machine.go:93] provisionDockerMachine start ...
	I1018 09:35:07.979451 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:08.003355 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:08.003689 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:08.003699 1481740 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:35:08.004657 1481740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 09:35:11.179820 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-250274
	
	I1018 09:35:11.179940 1481740 ubuntu.go:182] provisioning hostname "newest-cni-250274"
	I1018 09:35:11.180047 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:11.206494 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:11.206893 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:11.206920 1481740 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-250274 && echo "newest-cni-250274" | sudo tee /etc/hostname
	I1018 09:35:11.382677 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-250274
	
	I1018 09:35:11.382843 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:11.410095 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:11.410409 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:11.410427 1481740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250274/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:35:11.576823 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:35:11.576848 1481740 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:35:11.576870 1481740 ubuntu.go:190] setting up certificates
	I1018 09:35:11.576879 1481740 provision.go:84] configureAuth start
	I1018 09:35:11.576951 1481740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:35:11.595769 1481740 provision.go:143] copyHostCerts
	I1018 09:35:11.595828 1481740 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:35:11.596013 1481740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:35:11.596107 1481740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:35:11.596223 1481740 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:35:11.596229 1481740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:35:11.596257 1481740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:35:11.596318 1481740 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:35:11.596323 1481740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:35:11.596346 1481740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:35:11.596401 1481740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250274 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-250274]
	I1018 09:35:12.355708 1481740 provision.go:177] copyRemoteCerts
	I1018 09:35:12.355779 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:35:12.355831 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:12.375529 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	W1018 09:35:08.929925 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:35:10.929268 1474687 node_ready.go:49] node "default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:10.929306 1474687 node_ready.go:38] duration metric: took 41.002800702s for node "default-k8s-diff-port-593480" to be "Ready" ...
	I1018 09:35:10.929321 1474687 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:35:10.929387 1474687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:35:10.943201 1474687 api_server.go:72] duration metric: took 42.289449947s to wait for apiserver process to appear ...
	I1018 09:35:10.943224 1474687 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:35:10.943243 1474687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1018 09:35:10.963991 1474687 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
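	(Illustration: the probe above targets this profile's non-default apiserver port, 8444 rather than 8443. The same check can be reproduced by hand; -k skips TLS verification since the apiserver certificate is not in the local trust store, and anonymous access to /healthz is assumed to be enabled, as it is by default:)
	  curl -sk https://192.168.85.2:8444/healthz
	  # expected output: ok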
	I1018 09:35:10.965001 1474687 api_server.go:141] control plane version: v1.34.1
	I1018 09:35:10.965026 1474687 api_server.go:131] duration metric: took 21.794732ms to wait for apiserver health ...
	I1018 09:35:10.965035 1474687 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:35:10.968142 1474687 system_pods.go:59] 8 kube-system pods found
	I1018 09:35:10.968179 1474687 system_pods.go:61] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:35:10.968187 1474687 system_pods.go:61] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:10.968193 1474687 system_pods.go:61] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:10.968198 1474687 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:10.968204 1474687 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:10.968210 1474687 system_pods.go:61] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:10.968221 1474687 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:10.968237 1474687 system_pods.go:61] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:35:10.968256 1474687 system_pods.go:74] duration metric: took 3.214188ms to wait for pod list to return data ...
	I1018 09:35:10.968265 1474687 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:35:10.970910 1474687 default_sa.go:45] found service account: "default"
	I1018 09:35:10.970940 1474687 default_sa.go:55] duration metric: took 2.66185ms for default service account to be created ...
	I1018 09:35:10.970949 1474687 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:35:10.973952 1474687 system_pods.go:86] 8 kube-system pods found
	I1018 09:35:10.973988 1474687 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:35:10.973995 1474687 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:10.974001 1474687 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:10.974006 1474687 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:10.974011 1474687 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:10.974015 1474687 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:10.974020 1474687 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:10.974032 1474687 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:35:10.974053 1474687 retry.go:31] will retry after 221.086539ms: missing components: kube-dns
	I1018 09:35:11.227378 1474687 system_pods.go:86] 8 kube-system pods found
	I1018 09:35:11.227412 1474687 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:35:11.227419 1474687 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:11.227426 1474687 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:11.227430 1474687 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:11.227434 1474687 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:11.227438 1474687 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:11.227445 1474687 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:11.227450 1474687 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:35:11.227465 1474687 retry.go:31] will retry after 359.059247ms: missing components: kube-dns
	I1018 09:35:11.591651 1474687 system_pods.go:86] 8 kube-system pods found
	I1018 09:35:11.591680 1474687 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Running
	I1018 09:35:11.591687 1474687 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:11.591692 1474687 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:11.591696 1474687 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:11.591700 1474687 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:11.591704 1474687 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:11.591708 1474687 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:11.591711 1474687 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Running
	I1018 09:35:11.591719 1474687 system_pods.go:126] duration metric: took 620.76266ms to wait for k8s-apps to be running ...
	I1018 09:35:11.591731 1474687 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:35:11.591788 1474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:35:11.607085 1474687 system_svc.go:56] duration metric: took 15.349406ms WaitForService to wait for kubelet
	I1018 09:35:11.607109 1474687 kubeadm.go:586] duration metric: took 42.953363535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:35:11.607128 1474687 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:35:11.610448 1474687 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:35:11.610475 1474687 node_conditions.go:123] node cpu capacity is 2
	I1018 09:35:11.610486 1474687 node_conditions.go:105] duration metric: took 3.353063ms to run NodePressure ...
	I1018 09:35:11.610498 1474687 start.go:241] waiting for startup goroutines ...
	I1018 09:35:11.610506 1474687 start.go:246] waiting for cluster config update ...
	I1018 09:35:11.610516 1474687 start.go:255] writing updated cluster config ...
	I1018 09:35:11.610802 1474687 ssh_runner.go:195] Run: rm -f paused
	I1018 09:35:11.619175 1474687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:35:11.623267 1474687 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.629186 1474687 pod_ready.go:94] pod "coredns-66bc5c9577-lxwgf" is "Ready"
	I1018 09:35:11.629210 1474687 pod_ready.go:86] duration metric: took 5.918899ms for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.632132 1474687 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.637675 1474687 pod_ready.go:94] pod "etcd-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:11.637748 1474687 pod_ready.go:86] duration metric: took 5.592771ms for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.646445 1474687 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.652737 1474687 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:11.652766 1474687 pod_ready.go:86] duration metric: took 6.294159ms for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.657197 1474687 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.023696 1474687 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:12.023723 1474687 pod_ready.go:86] duration metric: took 366.501267ms for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.225055 1474687 pod_ready.go:83] waiting for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.623349 1474687 pod_ready.go:94] pod "kube-proxy-lz9p5" is "Ready"
	I1018 09:35:12.623374 1474687 pod_ready.go:86] duration metric: took 398.289755ms for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.823706 1474687 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:13.223594 1474687 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:13.223626 1474687 pod_ready.go:86] duration metric: took 399.888669ms for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:13.223639 1474687 pod_ready.go:40] duration metric: took 1.604415912s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:35:13.301877 1474687 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:35:13.305290 1474687 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-593480" cluster and "default" namespace by default
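	(Once this line is printed, the kubeconfig context is assumed to match the profile name, as minikube sets by default, so the freshly started cluster can be queried directly; an illustrative check against this run's profile:)
	  kubectl --context default-k8s-diff-port-593480 get pods -n kube-system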
	I1018 09:35:12.481985 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:35:12.500215 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:35:12.518262 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:35:12.535625 1481740 provision.go:87] duration metric: took 958.724947ms to configureAuth
	I1018 09:35:12.535656 1481740 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:35:12.535878 1481740 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:12.535994 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:12.554366 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:12.554803 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:12.554821 1481740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:35:12.843291 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:35:12.843313 1481740 machine.go:96] duration metric: took 4.863913345s to provisionDockerMachine
	I1018 09:35:12.843324 1481740 start.go:293] postStartSetup for "newest-cni-250274" (driver="docker")
	I1018 09:35:12.843334 1481740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:35:12.843391 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:35:12.843449 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:12.861749 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:12.964910 1481740 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:35:12.969111 1481740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:35:12.969140 1481740 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:35:12.969151 1481740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:35:12.969229 1481740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:35:12.969334 1481740 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:35:12.969489 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:35:12.977232 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:12.995598 1481740 start.go:296] duration metric: took 152.258132ms for postStartSetup
	I1018 09:35:12.995699 1481740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:35:12.995753 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:13.015253 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:13.116800 1481740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:35:13.121486 1481740 fix.go:56] duration metric: took 5.448972155s for fixHost
	I1018 09:35:13.121512 1481740 start.go:83] releasing machines lock for "newest-cni-250274", held for 5.449028423s
	I1018 09:35:13.121591 1481740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:35:13.138663 1481740 ssh_runner.go:195] Run: cat /version.json
	I1018 09:35:13.138745 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:13.139088 1481740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:35:13.139159 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:13.157849 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:13.158304 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:13.380597 1481740 ssh_runner.go:195] Run: systemctl --version
	I1018 09:35:13.387893 1481740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:35:13.474779 1481740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:35:13.480245 1481740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:35:13.480317 1481740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:35:13.489559 1481740 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:35:13.489580 1481740 start.go:495] detecting cgroup driver to use...
	I1018 09:35:13.489611 1481740 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:35:13.489658 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:35:13.506229 1481740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:35:13.530174 1481740 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:35:13.530234 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:35:13.549911 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:35:13.566716 1481740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:35:13.759046 1481740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:35:13.888862 1481740 docker.go:234] disabling docker service ...
	I1018 09:35:13.888950 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:35:13.905196 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:35:13.920613 1481740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:35:14.084167 1481740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:35:14.224030 1481740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:35:14.237413 1481740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:35:14.250763 1481740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:35:14.250832 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.259541 1481740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:35:14.259610 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.275347 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.284139 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.293584 1481740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:35:14.301331 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.316397 1481740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.324990 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.334447 1481740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:35:14.343757 1481740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:35:14.352870 1481740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:14.488953 1481740 ssh_runner.go:195] Run: sudo systemctl restart crio
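	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment before crio is restarted. This is reconstructed from the commands shown, not a dump of the actual file:)
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]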
	I1018 09:35:14.626670 1481740 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:35:14.626738 1481740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:35:14.631882 1481740 start.go:563] Will wait 60s for crictl version
	I1018 09:35:14.631943 1481740 ssh_runner.go:195] Run: which crictl
	I1018 09:35:14.635554 1481740 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:35:14.660118 1481740 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:35:14.660278 1481740 ssh_runner.go:195] Run: crio --version
	I1018 09:35:14.692419 1481740 ssh_runner.go:195] Run: crio --version
	I1018 09:35:14.724831 1481740 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:35:14.727979 1481740 cli_runner.go:164] Run: docker network inspect newest-cni-250274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:35:14.745664 1481740 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:35:14.749471 1481740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
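	(The rewrite above pins the host gateway name inside the guest; the /etc/hosts entry it installs, taken verbatim from the command, is:)
	  192.168.76.1	host.minikube.internal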
	I1018 09:35:14.764773 1481740 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:35:14.768286 1481740 kubeadm.go:883] updating cluster {Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:35:14.768419 1481740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:14.768503 1481740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:14.801828 1481740 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:14.801854 1481740 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:35:14.801911 1481740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:14.826228 1481740 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:14.826251 1481740 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:35:14.826259 1481740 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:35:14.826360 1481740 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-250274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:35:14.826446 1481740 ssh_runner.go:195] Run: crio config
	I1018 09:35:14.905972 1481740 cni.go:84] Creating CNI manager for ""
	I1018 09:35:14.905993 1481740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:14.906020 1481740 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:35:14.906044 1481740 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250274 NodeName:newest-cni-250274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:35:14.906187 1481740 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-250274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:35:14.906344 1481740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:35:14.914745 1481740 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:35:14.914860 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:35:14.922783 1481740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:35:14.936583 1481740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:35:14.950976 1481740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
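The rendered manifest above is staged at /var/tmp/minikube/kubeadm.yaml.new; a file like it can be sanity-checked before use (a hedged sketch; "kubeadm config validate" is available in recent kubeadm releases):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new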
	I1018 09:35:14.964788 1481740 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:35:14.968749 1481740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:35:14.978644 1481740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:15.109859 1481740 ssh_runner.go:195] Run: sudo systemctl start kubelet
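Had the kubelet start above failed, the journal is the first place to look (hedged, standard systemd tooling on this image):

	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet -n 30 --no-pager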
	I1018 09:35:15.132454 1481740 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274 for IP: 192.168.76.2
	I1018 09:35:15.132477 1481740 certs.go:195] generating shared ca certs ...
	I1018 09:35:15.132494 1481740 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:15.132690 1481740 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:35:15.132760 1481740 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:35:15.132775 1481740 certs.go:257] generating profile certs ...
	I1018 09:35:15.132897 1481740 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/client.key
	I1018 09:35:15.132989 1481740 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key.08fa8726
	I1018 09:35:15.133059 1481740 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key
	I1018 09:35:15.133219 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:35:15.133276 1481740 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:35:15.133293 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:35:15.133334 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:35:15.133379 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:35:15.133413 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:35:15.133491 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:15.134198 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:35:15.158815 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:35:15.184882 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:35:15.209112 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:35:15.230570 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:35:15.255541 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:35:15.303005 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:35:15.334594 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:35:15.354973 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:35:15.378445 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:35:15.400905 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:35:15.424111 1481740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:35:15.439388 1481740 ssh_runner.go:195] Run: openssl version
	I1018 09:35:15.446286 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:35:15.455046 1481740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:15.458845 1481740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:15.458926 1481740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:15.504387 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:35:15.512106 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:35:15.520175 1481740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:35:15.523930 1481740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:35:15.524021 1481740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:35:15.565230 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:35:15.573270 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:35:15.581447 1481740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:35:15.585095 1481740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:35:15.585159 1481740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:35:15.627708 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
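The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how OpenSSL locates a CA in /etc/ssl/certs; the same mechanism in two lines (a minimal sketch over one of the certs above):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"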
	I1018 09:35:15.635390 1481740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:35:15.639295 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:35:15.691161 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:35:15.743878 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:35:15.798855 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:35:15.901722 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:35:15.990208 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
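Each of the six checks above relies on openssl's -checkend flag, which exits non-zero when the certificate expires within the given number of seconds (86400 = 24h); the equivalent loop over the same cert set (a sketch):

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	    || echo "${c}.crt expires within 24h"
	done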
	I1018 09:35:16.123903 1481740 kubeadm.go:400] StartCluster: {Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:16.124038 1481740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:35:16.124128 1481740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:35:16.203415 1481740 cri.go:89] found id: "838ef5430e58bb4a609136dfa74910535190f395496c2bd21432db44c19aaff4"
	I1018 09:35:16.203482 1481740 cri.go:89] found id: "52152d05aeb48008c167a0cc9d9f80e34c5ab6124747ccfbbf79ba25a61db69f"
	I1018 09:35:16.203501 1481740 cri.go:89] found id: "66052a766abf5dba4b7c9118f1e1e91be861206c216d0a3766c7fcebd6504824"
	I1018 09:35:16.203518 1481740 cri.go:89] found id: "89f5e6f41611e1935f1802e4ae146f223304dda14ce071d5b606ea7ceb35d965"
	I1018 09:35:16.203536 1481740 cri.go:89] found id: ""
	I1018 09:35:16.203612 1481740 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:35:16.224327 1481740 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:16Z" level=error msg="open /run/runc: no such file or directory"
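The "open /run/runc: no such file or directory" above just means runc has no state directory on this host, which is expected when CRI-O is driving a different OCI runtime or state root; the warning is non-fatal here and the restart path continues. One hedged way to see which runtime CRI-O is configured with:

	sudo crio config 2>/dev/null | grep -E 'default_runtime|^\[crio\.runtime\]'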
	I1018 09:35:16.224461 1481740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:35:16.238114 1481740 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:35:16.238179 1481740 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:35:16.238261 1481740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:35:16.248851 1481740 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:35:16.249531 1481740 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-250274" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:16.249863 1481740 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-1274243/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-250274" cluster setting kubeconfig missing "newest-cni-250274" context setting]
	I1018 09:35:16.250496 1481740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:16.252747 1481740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:35:16.278778 1481740 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:35:16.278819 1481740 kubeadm.go:601] duration metric: took 40.619299ms to restartPrimaryControlPlane
	I1018 09:35:16.278829 1481740 kubeadm.go:402] duration metric: took 154.935603ms to StartCluster
	I1018 09:35:16.278848 1481740 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:16.278924 1481740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:16.279901 1481740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:16.280119 1481740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:35:16.280488 1481740 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:16.280563 1481740 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:35:16.280675 1481740 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-250274"
	I1018 09:35:16.280695 1481740 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-250274"
	W1018 09:35:16.280706 1481740 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:35:16.280723 1481740 addons.go:69] Setting dashboard=true in profile "newest-cni-250274"
	I1018 09:35:16.280768 1481740 addons.go:238] Setting addon dashboard=true in "newest-cni-250274"
	W1018 09:35:16.280798 1481740 addons.go:247] addon dashboard should already be in state true
	I1018 09:35:16.280727 1481740 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:16.280863 1481740 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:16.281319 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.281589 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.280734 1481740 addons.go:69] Setting default-storageclass=true in profile "newest-cni-250274"
	I1018 09:35:16.281807 1481740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-250274"
	I1018 09:35:16.282555 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.284931 1481740 out.go:179] * Verifying Kubernetes components...
	I1018 09:35:16.295995 1481740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:16.348048 1481740 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:35:16.351007 1481740 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:35:16.351060 1481740 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:35:16.352470 1481740 addons.go:238] Setting addon default-storageclass=true in "newest-cni-250274"
	W1018 09:35:16.352487 1481740 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:35:16.352512 1481740 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:16.352956 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.355937 1481740 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:16.355961 1481740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:35:16.356022 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:16.356163 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:35:16.356177 1481740 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:35:16.356219 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:16.401098 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:16.406112 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:16.417491 1481740 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:16.417513 1481740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:35:16.417578 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:16.451582 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:16.682103 1481740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:16.699283 1481740 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:35:16.699381 1481740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:35:16.725360 1481740 api_server.go:72] duration metric: took 445.209726ms to wait for apiserver process to appear ...
	I1018 09:35:16.725426 1481740 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:35:16.725473 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:16.739521 1481740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:16.744868 1481740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:16.751681 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:35:16.751742 1481740 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:35:16.791579 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:35:16.791645 1481740 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:35:16.820337 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:35:16.820401 1481740 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:35:16.847089 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:35:16.847152 1481740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:35:16.910886 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:35:16.910961 1481740 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:35:16.991044 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:35:16.991119 1481740 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:35:17.042713 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:35:17.042798 1481740 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:35:17.058686 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:35:17.058760 1481740 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:35:17.077136 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:35:17.077208 1481740 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:35:17.095024 1481740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:35:21.727935 1481740 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:35:21.728037 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:22.260351 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:35:22.260421 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:35:22.260452 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:22.305300 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:35:22.305371 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
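The 403s above are the anonymous probe being rejected; /healthz is normally opened to unauthenticated users only after the apiserver reconciles its bootstrap RBAC roles, so this typically resolves on its own. With credentials the same endpoint can be queried directly (a hedged check using the on-host admin kubeconfig):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl get --raw /healthz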
	
	
	==> CRI-O <==
	Oct 18 09:35:11 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:11.194697165Z" level=info msg="Created container f0073d4b3daf90c747dac786e97a9c77433f38ade49e63a9c6492a208c3f2112: kube-system/coredns-66bc5c9577-lxwgf/coredns" id=36772ca5-8c2c-44cd-888f-3699cf678c8d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:11 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:11.197216381Z" level=info msg="Starting container: f0073d4b3daf90c747dac786e97a9c77433f38ade49e63a9c6492a208c3f2112" id=813b9ec4-c7a8-4372-8487-7c5ba5726210 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:35:11 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:11.199293402Z" level=info msg="Started container" PID=1734 containerID=f0073d4b3daf90c747dac786e97a9c77433f38ade49e63a9c6492a208c3f2112 description=kube-system/coredns-66bc5c9577-lxwgf/coredns id=813b9ec4-c7a8-4372-8487-7c5ba5726210 name=/runtime.v1.RuntimeService/StartContainer sandboxID=50f4a8e9fa9dd5a49744bd0742a79b765f53037e792e1fb59dfd6ec66ba095cf
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.921635838Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7e4ad170-9c65-4435-8cdb-f0768d9b9611 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.922152386Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.927504598Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d4bd26e7c6cab1b26a83147915b478b7c4650456253d4ce69144a4633a680662 UID:b16ad816-3da6-4828-b35a-f8c0f32a7093 NetNS:/var/run/netns/574587e6-9bf9-4f83-bf8c-e1cbc37df8c7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000790a0}] Aliases:map[]}"
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.92754472Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.94893641Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d4bd26e7c6cab1b26a83147915b478b7c4650456253d4ce69144a4633a680662 UID:b16ad816-3da6-4828-b35a-f8c0f32a7093 NetNS:/var/run/netns/574587e6-9bf9-4f83-bf8c-e1cbc37df8c7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000790a0}] Aliases:map[]}"
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.949085699Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.953086506Z" level=info msg="Ran pod sandbox d4bd26e7c6cab1b26a83147915b478b7c4650456253d4ce69144a4633a680662 with infra container: default/busybox/POD" id=7e4ad170-9c65-4435-8cdb-f0768d9b9611 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.954267388Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=409c0b44-8ee6-4765-b686-e98dc90bfc9c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.954391651Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=409c0b44-8ee6-4765-b686-e98dc90bfc9c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.954435654Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=409c0b44-8ee6-4765-b686-e98dc90bfc9c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.957760066Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=18355445-8568-4f9e-a7e2-4715ce7f6132 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:35:13 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:13.961363231Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.053739645Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=18355445-8568-4f9e-a7e2-4715ce7f6132 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.054847651Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29b1773a-46cd-4b0b-afb6-da1519e16879 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.058693649Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=283e1ed5-28d0-4039-828d-69674d179f16 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.068576696Z" level=info msg="Creating container: default/busybox/busybox" id=225bae4d-73c6-49d4-a4a5-5c3481e7c191 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.069560776Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.083728159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.08457895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.113859611Z" level=info msg="Created container ce6c9aea4e932ceeabeebe3561a97d6948637c091c324f43118ccba63ae12a94: default/busybox/busybox" id=225bae4d-73c6-49d4-a4a5-5c3481e7c191 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.117075824Z" level=info msg="Starting container: ce6c9aea4e932ceeabeebe3561a97d6948637c091c324f43118ccba63ae12a94" id=b6012962-f85f-473e-972a-9c6a2a321f84 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:35:16 default-k8s-diff-port-593480 crio[838]: time="2025-10-18T09:35:16.121694715Z" level=info msg="Started container" PID=1787 containerID=ce6c9aea4e932ceeabeebe3561a97d6948637c091c324f43118ccba63ae12a94 description=default/busybox/busybox id=b6012962-f85f-473e-972a-9c6a2a321f84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d4bd26e7c6cab1b26a83147915b478b7c4650456253d4ce69144a4633a680662
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	ce6c9aea4e932       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   d4bd26e7c6cab       busybox                                                default
	f0073d4b3daf9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   50f4a8e9fa9dd       coredns-66bc5c9577-lxwgf                               kube-system
	91f023eaf0e18       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   78b45a71d5fc8       storage-provisioner                                    kube-system
	35755388098ff       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   e5ddb1422990d       kindnet-ptbw6                                          kube-system
	8f1347043c176       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   03e14b7ca037a       kube-proxy-lz9p5                                       kube-system
	0374b199ad658       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   5c94672c06e4d       kube-controller-manager-default-k8s-diff-port-593480   kube-system
	ef1397c4700fd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   4fa01a21928fb       etcd-default-k8s-diff-port-593480                      kube-system
	9a9d36929f70c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   4ccdd15e0f517       kube-scheduler-default-k8s-diff-port-593480            kube-system
	5d31981248f10       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   95c4d432b77fa       kube-apiserver-default-k8s-diff-port-593480            kube-system
	
	
	==> coredns [f0073d4b3daf90c747dac786e97a9c77433f38ade49e63a9c6492a208c3f2112] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53933 - 47583 "HINFO IN 3606640208472761954.8038254303882899699. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030190379s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-593480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-593480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=default-k8s-diff-port-593480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_34_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-593480
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:35:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:35:24 +0000   Sat, 18 Oct 2025 09:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:35:24 +0000   Sat, 18 Oct 2025 09:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:35:24 +0000   Sat, 18 Oct 2025 09:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:35:24 +0000   Sat, 18 Oct 2025 09:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-593480
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                49945b4a-cdd7-400f-9239-4b91af7db42e
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-lxwgf                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-default-k8s-diff-port-593480                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-ptbw6                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-593480             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-593480    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-lz9p5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-593480             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node default-k8s-diff-port-593480 event: Registered Node default-k8s-diff-port-593480 in Controller
	  Normal   NodeReady                14s                kubelet          Node default-k8s-diff-port-593480 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	[Oct18 09:34] overlayfs: idmapped layers are currently not supported
	[ +34.458375] overlayfs: idmapped layers are currently not supported
	[Oct18 09:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ef1397c4700fd01f1cf021e44da2cf5409439d28b6fe4cb2364bce4208ba50c2] <==
	{"level":"warn","ts":"2025-10-18T09:34:17.296041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.370744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.415774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.473649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.516321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.551226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.597906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.660096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.661733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.702989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.750661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.829547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.857455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.882383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.907936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:17.971978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:18.015052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:18.089056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:18.135385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:18.190887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:18.279468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:18.313106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:18.341067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:18.427659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:34:18.593295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40926","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:24 up 11:17,  0 user,  load average: 2.48, 3.05, 2.68
	Linux default-k8s-diff-port-593480 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [35755388098ffa389180f38dc639fe94b4895cd43bd4af5fa9bd94d281d215d8] <==
	I1018 09:34:30.415950       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:34:30.416192       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 09:34:30.416341       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:34:30.416358       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:34:30.416368       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:34:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:34:30.615090       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:34:30.615162       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:34:30.615194       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:34:30.708425       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:35:00.615300       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:35:00.615572       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1018 09:35:00.616250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:35:00.616250       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1018 09:35:02.115829       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:35:02.115950       1 metrics.go:72] Registering metrics
	I1018 09:35:02.116058       1 controller.go:711] "Syncing nftables rules"
	I1018 09:35:10.622182       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:35:10.622220       1 main.go:301] handling current node
	I1018 09:35:20.617416       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:35:20.617478       1 main.go:301] handling current node
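
The four reflector errors at 09:35:00 show kindnet briefly unable to reach the in-cluster apiserver VIP (10.96.0.1:443); its caches sync two seconds later, so the outage was transient. A sketch of the same reachability check done by hand, assuming curl is present in the node image (the VIP comes from the errors above; the node address and port 8444 come from the kube-apiserver log below):

    # Service VIP path, the one kindnet uses:
    minikube ssh -p default-k8s-diff-port-593480 -- 'curl -sk -m 5 https://10.96.0.1:443/livez; echo'
    # Direct node path, bypassing the service proxy rules:
    minikube ssh -p default-k8s-diff-port-593480 -- 'curl -sk -m 5 https://192.168.85.2:8444/livez; echo'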
	
	
	==> kube-apiserver [5d31981248f102e40c67d4b3686d3713544733b607c978881b5d98b51bbc2a4d] <==
	I1018 09:34:20.019260       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:34:20.140837       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:34:20.213914       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:34:20.228514       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:34:20.240464       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:34:20.314931       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:34:20.315033       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:34:20.589658       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:34:20.605520       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:34:20.605671       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:34:21.740835       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:34:21.798612       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:34:21.908399       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:34:21.916026       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1018 09:34:21.917214       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:34:21.925518       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:34:22.128379       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:34:22.905988       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:34:22.940585       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:34:22.961837       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:34:27.263333       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:34:27.275360       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:34:27.902211       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:34:28.305399       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1018 09:35:22.773007       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:45980: use of closed network connection
	
	
	==> kube-controller-manager [0374b199ad658a94ef889cbae394fde6ae58860106ba81100b5a36bd447cf7bf] <==
	I1018 09:34:27.165936       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:34:27.167167       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:34:27.168331       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:34:27.168383       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:34:27.169492       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:34:27.171637       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:34:27.171652       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:34:27.171662       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:34:27.172707       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:34:27.172712       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:34:27.175895       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:34:27.178510       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:34:27.178581       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:34:27.179815       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:34:27.182104       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:34:27.184384       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:34:27.184390       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:34:27.184549       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:34:27.184584       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:34:27.184591       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:34:27.184597       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:34:27.190589       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:34:27.210019       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:34:27.232121       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-593480" podCIDRs=["10.244.0.0/24"]
	I1018 09:35:12.130008       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8f1347043c17638ad54da213ac075f9ba69f9faa2065e592f1001146a2dd6803] <==
	I1018 09:34:30.320312       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:34:30.418641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:34:30.519153       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:34:30.519192       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 09:34:30.519282       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:34:30.538131       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:34:30.538195       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:34:30.542873       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:34:30.543441       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:34:30.543465       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:34:30.545540       1 config.go:200] "Starting service config controller"
	I1018 09:34:30.545560       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:34:30.545578       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:34:30.545582       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:34:30.545597       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:34:30.545601       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:34:30.546280       1 config.go:309] "Starting node config controller"
	I1018 09:34:30.546293       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:34:30.546299       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:34:30.645736       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:34:30.645723       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:34:30.645766       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
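
The one error in this block is kube-proxy's standing complaint that nodePortAddresses is unset, so NodePort connections are accepted on every local IP. If that matters for a deployment, the field the warning names can be set in the kubeadm-managed ConfigMap; a sketch, assuming the default config.conf layout where the field reads "nodePortAddresses: null" (verify the actual ConfigMap before piping through sed):

    kubectl --context default-k8s-diff-port-593480 -n kube-system get configmap kube-proxy -o yaml \
      | sed 's/nodePortAddresses: null/nodePortAddresses: ["primary"]/' \
      | kubectl --context default-k8s-diff-port-593480 apply -f -
    # kube-proxy only rereads its config on restart:
    kubectl --context default-k8s-diff-port-593480 -n kube-system rollout restart daemonset kube-proxy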
	
	
	==> kube-scheduler [9a9d36929f70c7e58e8e28b7dcf1e28121a38c5b4a83fe7a9224e50245fcebbe] <==
	I1018 09:34:20.459731       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1018 09:34:20.466860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1018 09:34:20.467932       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:34:20.468073       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 09:34:20.499122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:34:20.499277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:34:20.499470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:34:20.504203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:34:20.504385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:34:20.504491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:34:20.504841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:34:20.506936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:34:20.507072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:34:20.507132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:34:20.507221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:34:20.507238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:34:20.507253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:34:20.507639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:34:20.513112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:34:20.513339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:34:20.513619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:34:20.513869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:34:21.325871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:34:21.364265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1018 09:34:22.160033       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
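
The burst of "Failed to watch ... is forbidden" errors between 09:34:20 and 09:34:21 is the usual control-plane bring-up race: the scheduler starts its watches before kubeadm has finished binding RBAC for system:kube-scheduler, and the final "Caches are synced" line at 09:34:22 shows it resolved on its own. A quick after-the-fact check that the permissions settled (user and resources taken from the errors above):

    kubectl --context default-k8s-diff-port-593480 auth can-i list pods --as=system:kube-scheduler
    kubectl --context default-k8s-diff-port-593480 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler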
	
	
	==> kubelet <==
	Oct 18 09:34:28 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:34:28.304844    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t29mw\" (UniqueName: \"kubernetes.io/projected/5fa3779f-2d5f-4303-8f6b-af5ae96f1fae-kube-api-access-t29mw\") pod \"kindnet-ptbw6\" (UID: \"5fa3779f-2d5f-4303-8f6b-af5ae96f1fae\") " pod="kube-system/kindnet-ptbw6"
	Oct 18 09:34:28 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:34:28.304878    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5fa3779f-2d5f-4303-8f6b-af5ae96f1fae-cni-cfg\") pod \"kindnet-ptbw6\" (UID: \"5fa3779f-2d5f-4303-8f6b-af5ae96f1fae\") " pod="kube-system/kindnet-ptbw6"
	Oct 18 09:34:28 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:34:28.304918    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fa3779f-2d5f-4303-8f6b-af5ae96f1fae-lib-modules\") pod \"kindnet-ptbw6\" (UID: \"5fa3779f-2d5f-4303-8f6b-af5ae96f1fae\") " pod="kube-system/kindnet-ptbw6"
	Oct 18 09:34:29 default-k8s-diff-port-593480 kubelet[1305]: E1018 09:34:29.307303    1305 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 09:34:29 default-k8s-diff-port-593480 kubelet[1305]: E1018 09:34:29.307401    1305 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/df6ea9c5-3f27-4e58-be1b-c6f47b71aa63-kube-proxy podName:df6ea9c5-3f27-4e58-be1b-c6f47b71aa63 nodeName:}" failed. No retries permitted until 2025-10-18 09:34:29.807376328 +0000 UTC m=+6.949826125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/df6ea9c5-3f27-4e58-be1b-c6f47b71aa63-kube-proxy") pod "kube-proxy-lz9p5" (UID: "df6ea9c5-3f27-4e58-be1b-c6f47b71aa63") : failed to sync configmap cache: timed out waiting for the condition
	Oct 18 09:34:29 default-k8s-diff-port-593480 kubelet[1305]: E1018 09:34:29.447094    1305 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 09:34:29 default-k8s-diff-port-593480 kubelet[1305]: E1018 09:34:29.447145    1305 projected.go:196] Error preparing data for projected volume kube-api-access-nz97n for pod kube-system/kube-proxy-lz9p5: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 09:34:29 default-k8s-diff-port-593480 kubelet[1305]: E1018 09:34:29.447243    1305 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/df6ea9c5-3f27-4e58-be1b-c6f47b71aa63-kube-api-access-nz97n podName:df6ea9c5-3f27-4e58-be1b-c6f47b71aa63 nodeName:}" failed. No retries permitted until 2025-10-18 09:34:29.947218265 +0000 UTC m=+7.089668062 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nz97n" (UniqueName: "kubernetes.io/projected/df6ea9c5-3f27-4e58-be1b-c6f47b71aa63-kube-api-access-nz97n") pod "kube-proxy-lz9p5" (UID: "df6ea9c5-3f27-4e58-be1b-c6f47b71aa63") : failed to sync configmap cache: timed out waiting for the condition
	Oct 18 09:34:29 default-k8s-diff-port-593480 kubelet[1305]: E1018 09:34:29.519619    1305 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 09:34:29 default-k8s-diff-port-593480 kubelet[1305]: E1018 09:34:29.519681    1305 projected.go:196] Error preparing data for projected volume kube-api-access-t29mw for pod kube-system/kindnet-ptbw6: failed to sync configmap cache: timed out waiting for the condition
	Oct 18 09:34:29 default-k8s-diff-port-593480 kubelet[1305]: E1018 09:34:29.519773    1305 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5fa3779f-2d5f-4303-8f6b-af5ae96f1fae-kube-api-access-t29mw podName:5fa3779f-2d5f-4303-8f6b-af5ae96f1fae nodeName:}" failed. No retries permitted until 2025-10-18 09:34:30.019750463 +0000 UTC m=+7.162200260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t29mw" (UniqueName: "kubernetes.io/projected/5fa3779f-2d5f-4303-8f6b-af5ae96f1fae-kube-api-access-t29mw") pod "kindnet-ptbw6" (UID: "5fa3779f-2d5f-4303-8f6b-af5ae96f1fae") : failed to sync configmap cache: timed out waiting for the condition
	Oct 18 09:34:30 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:34:30.041431    1305 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 09:34:30 default-k8s-diff-port-593480 kubelet[1305]: W1018 09:34:30.350162    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/crio-e5ddb1422990d85a9d65bbb2c8799d8df271e9c0e1c71d6eb93e740249e66374 WatchSource:0}: Error finding container e5ddb1422990d85a9d65bbb2c8799d8df271e9c0e1c71d6eb93e740249e66374: Status 404 returned error can't find the container with id e5ddb1422990d85a9d65bbb2c8799d8df271e9c0e1c71d6eb93e740249e66374
	Oct 18 09:34:31 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:34:31.318924    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lz9p5" podStartSLOduration=4.318903136 podStartE2EDuration="4.318903136s" podCreationTimestamp="2025-10-18 09:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:34:30.314779178 +0000 UTC m=+7.457228983" watchObservedRunningTime="2025-10-18 09:34:31.318903136 +0000 UTC m=+8.461352941"
	Oct 18 09:34:32 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:34:32.080105    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-ptbw6" podStartSLOduration=5.080085236 podStartE2EDuration="5.080085236s" podCreationTimestamp="2025-10-18 09:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:34:31.320561805 +0000 UTC m=+8.463011610" watchObservedRunningTime="2025-10-18 09:34:32.080085236 +0000 UTC m=+9.222535041"
	Oct 18 09:35:10 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:35:10.716406    1305 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:35:10 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:35:10.833713    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/da9f578c-74b8-40c2-a810-245c70e07eae-tmp\") pod \"storage-provisioner\" (UID: \"da9f578c-74b8-40c2-a810-245c70e07eae\") " pod="kube-system/storage-provisioner"
	Oct 18 09:35:10 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:35:10.833767    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrcnj\" (UniqueName: \"kubernetes.io/projected/da9f578c-74b8-40c2-a810-245c70e07eae-kube-api-access-jrcnj\") pod \"storage-provisioner\" (UID: \"da9f578c-74b8-40c2-a810-245c70e07eae\") " pod="kube-system/storage-provisioner"
	Oct 18 09:35:10 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:35:10.833800    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dfe7cc5-827f-4a29-932a-943c05bc729e-config-volume\") pod \"coredns-66bc5c9577-lxwgf\" (UID: \"7dfe7cc5-827f-4a29-932a-943c05bc729e\") " pod="kube-system/coredns-66bc5c9577-lxwgf"
	Oct 18 09:35:10 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:35:10.833819    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76tx\" (UniqueName: \"kubernetes.io/projected/7dfe7cc5-827f-4a29-932a-943c05bc729e-kube-api-access-x76tx\") pod \"coredns-66bc5c9577-lxwgf\" (UID: \"7dfe7cc5-827f-4a29-932a-943c05bc729e\") " pod="kube-system/coredns-66bc5c9577-lxwgf"
	Oct 18 09:35:11 default-k8s-diff-port-593480 kubelet[1305]: W1018 09:35:11.138348    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/crio-50f4a8e9fa9dd5a49744bd0742a79b765f53037e792e1fb59dfd6ec66ba095cf WatchSource:0}: Error finding container 50f4a8e9fa9dd5a49744bd0742a79b765f53037e792e1fb59dfd6ec66ba095cf: Status 404 returned error can't find the container with id 50f4a8e9fa9dd5a49744bd0742a79b765f53037e792e1fb59dfd6ec66ba095cf
	Oct 18 09:35:11 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:35:11.469005    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lxwgf" podStartSLOduration=43.468985848 podStartE2EDuration="43.468985848s" podCreationTimestamp="2025-10-18 09:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:35:11.433124168 +0000 UTC m=+48.575573998" watchObservedRunningTime="2025-10-18 09:35:11.468985848 +0000 UTC m=+48.611435653"
	Oct 18 09:35:13 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:35:13.607551    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=44.607531585 podStartE2EDuration="44.607531585s" podCreationTimestamp="2025-10-18 09:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:35:11.496897431 +0000 UTC m=+48.639347244" watchObservedRunningTime="2025-10-18 09:35:13.607531585 +0000 UTC m=+50.749981390"
	Oct 18 09:35:13 default-k8s-diff-port-593480 kubelet[1305]: I1018 09:35:13.656969    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6gp4\" (UniqueName: \"kubernetes.io/projected/b16ad816-3da6-4828-b35a-f8c0f32a7093-kube-api-access-b6gp4\") pod \"busybox\" (UID: \"b16ad816-3da6-4828-b35a-f8c0f32a7093\") " pod="default/busybox"
	Oct 18 09:35:13 default-k8s-diff-port-593480 kubelet[1305]: W1018 09:35:13.951111    1305 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/crio-d4bd26e7c6cab1b26a83147915b478b7c4650456253d4ce69144a4633a680662 WatchSource:0}: Error finding container d4bd26e7c6cab1b26a83147915b478b7c4650456253d4ce69144a4633a680662: Status 404 returned error can't find the container with id d4bd26e7c6cab1b26a83147915b478b7c4650456253d4ce69144a4633a680662
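
The kubelet errors at 09:34:29 ("failed to sync configmap cache: timed out") cost the kube-proxy and kindnet volume mounts one 500ms retry each; both pods report started by 09:34:31, so nothing was lost. Any mount problem that did persist would surface as events; a sketch of that check (FailedMount is the standard kubelet event reason, not a string from this log):

    kubectl --context default-k8s-diff-port-593480 -n kube-system get events --field-selector reason=FailedMount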
	
	
	==> storage-provisioner [91f023eaf0e183d5f184809fa038d17a249547a2c38f773f2a12de7abfed33b6] <==
	I1018 09:35:11.156411       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:35:11.175651       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:35:11.175742       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:35:11.193542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:11.239723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:35:11.240023       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:35:11.240239       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-593480_6fc938f2-ba94-407b-8b26-7afdfce84eba!
	I1018 09:35:11.241237       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75bb76a9-c543-40fa-ba6e-108e81012c94", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-593480_6fc938f2-ba94-407b-8b26-7afdfce84eba became leader
	W1018 09:35:11.249508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:11.252907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:35:11.341113       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-593480_6fc938f2-ba94-407b-8b26-7afdfce84eba!
	W1018 09:35:13.275183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:13.281012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:15.284548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:15.289841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:17.294771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:17.304547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:19.320662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:19.339965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:21.343628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:21.348392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:23.356146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:23.378300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:25.381919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:35:25.391532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
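
The repeating warnings are expected: this storage-provisioner build still takes its leader-election lock on a v1 Endpoints object, and the apiserver flags that type as deprecated on every renewal. The lock object named in the log can be inspected directly:

    kubectl --context default-k8s-diff-port-593480 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml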
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-593480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.50s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-250274 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-250274 --alsologtostderr -v=1: exit status 80 (2.019024224s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-250274 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:35:26.438128 1484062 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:35:26.438337 1484062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:26.438367 1484062 out.go:374] Setting ErrFile to fd 2...
	I1018 09:35:26.438398 1484062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:26.441151 1484062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:35:26.444232 1484062 out.go:368] Setting JSON to false
	I1018 09:35:26.444319 1484062 mustload.go:65] Loading cluster: newest-cni-250274
	I1018 09:35:26.444766 1484062 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:26.447653 1484062 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:26.486943 1484062 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:26.487250 1484062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:26.586788 1484062 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:68 SystemTime:2025-10-18 09:35:26.573326232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:26.587454 1484062 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-250274 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:35:26.590581 1484062 out.go:179] * Pausing node newest-cni-250274 ... 
	I1018 09:35:26.594311 1484062 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:26.594642 1484062 ssh_runner.go:195] Run: systemctl --version
	I1018 09:35:26.594684 1484062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:26.617058 1484062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:26.731469 1484062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:35:26.746522 1484062 pause.go:52] kubelet running: true
	I1018 09:35:26.746584 1484062 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:35:27.069354 1484062 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:35:27.069452 1484062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:35:27.146875 1484062 cri.go:89] found id: "3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e"
	I1018 09:35:27.146898 1484062 cri.go:89] found id: "21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482"
	I1018 09:35:27.146904 1484062 cri.go:89] found id: "838ef5430e58bb4a609136dfa74910535190f395496c2bd21432db44c19aaff4"
	I1018 09:35:27.146907 1484062 cri.go:89] found id: "52152d05aeb48008c167a0cc9d9f80e34c5ab6124747ccfbbf79ba25a61db69f"
	I1018 09:35:27.146911 1484062 cri.go:89] found id: "66052a766abf5dba4b7c9118f1e1e91be861206c216d0a3766c7fcebd6504824"
	I1018 09:35:27.146915 1484062 cri.go:89] found id: "89f5e6f41611e1935f1802e4ae146f223304dda14ce071d5b606ea7ceb35d965"
	I1018 09:35:27.146918 1484062 cri.go:89] found id: ""
	I1018 09:35:27.146987 1484062 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:35:27.157815 1484062 retry.go:31] will retry after 160.00867ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:27Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:35:27.318152 1484062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:35:27.331147 1484062 pause.go:52] kubelet running: false
	I1018 09:35:27.331229 1484062 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:35:27.481985 1484062 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:35:27.482085 1484062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:35:27.555780 1484062 cri.go:89] found id: "3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e"
	I1018 09:35:27.555803 1484062 cri.go:89] found id: "21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482"
	I1018 09:35:27.555809 1484062 cri.go:89] found id: "838ef5430e58bb4a609136dfa74910535190f395496c2bd21432db44c19aaff4"
	I1018 09:35:27.555818 1484062 cri.go:89] found id: "52152d05aeb48008c167a0cc9d9f80e34c5ab6124747ccfbbf79ba25a61db69f"
	I1018 09:35:27.555823 1484062 cri.go:89] found id: "66052a766abf5dba4b7c9118f1e1e91be861206c216d0a3766c7fcebd6504824"
	I1018 09:35:27.555826 1484062 cri.go:89] found id: "89f5e6f41611e1935f1802e4ae146f223304dda14ce071d5b606ea7ceb35d965"
	I1018 09:35:27.555829 1484062 cri.go:89] found id: ""
	I1018 09:35:27.555906 1484062 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:35:27.567342 1484062 retry.go:31] will retry after 517.205522ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:27Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:35:28.084776 1484062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:35:28.098902 1484062 pause.go:52] kubelet running: false
	I1018 09:35:28.098982 1484062 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:35:28.240678 1484062 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:35:28.240755 1484062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:35:28.307130 1484062 cri.go:89] found id: "3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e"
	I1018 09:35:28.307152 1484062 cri.go:89] found id: "21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482"
	I1018 09:35:28.307156 1484062 cri.go:89] found id: "838ef5430e58bb4a609136dfa74910535190f395496c2bd21432db44c19aaff4"
	I1018 09:35:28.307160 1484062 cri.go:89] found id: "52152d05aeb48008c167a0cc9d9f80e34c5ab6124747ccfbbf79ba25a61db69f"
	I1018 09:35:28.307164 1484062 cri.go:89] found id: "66052a766abf5dba4b7c9118f1e1e91be861206c216d0a3766c7fcebd6504824"
	I1018 09:35:28.307167 1484062 cri.go:89] found id: "89f5e6f41611e1935f1802e4ae146f223304dda14ce071d5b606ea7ceb35d965"
	I1018 09:35:28.307170 1484062 cri.go:89] found id: ""
	I1018 09:35:28.307225 1484062 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:35:28.321717 1484062 out.go:203] 
	W1018 09:35:28.324759 1484062 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:35:28.324824 1484062 out.go:285] * 
	* 
	W1018 09:35:28.334391 1484062 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:35:28.337370 1484062 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-250274 --alsologtostderr -v=1 failed: exit status 80
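
The root cause in the stderr above is that minikube's pause path shells out to "sudo runc list -f json", and /run/runc does not exist inside this crio-based node, so all three attempts fail identically before the GUEST_PAUSE exit. A manual reproduction, assuming SSH access via the profile (the first and last commands are lifted from the stderr; the candidate state directories in the middle are guesses at where this image's OCI runtime keeps its state):

    minikube ssh -p newest-cni-250274 -- 'sudo runc list -f json'               # fails: open /run/runc: no such file or directory
    minikube ssh -p newest-cni-250274 -- 'sudo ls -d /run/runc /run/crun /run/crio 2>&1'
    minikube ssh -p newest-cni-250274 -- 'sudo crictl ps -a'                    # the CRI view of the same containers still works

Note also that the first loop iteration disabled kubelet ("kubelet running: true" then "false"), so the failed pause leaves the node with kubelet stopped but its containers still running.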
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-250274
helpers_test.go:243: (dbg) docker inspect newest-cni-250274:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4",
	        "Created": "2025-10-18T09:34:27.497200504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1481864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:35:07.723471637Z",
	            "FinishedAt": "2025-10-18T09:35:06.806103641Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/hosts",
	        "LogPath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4-json.log",
	        "Name": "/newest-cni-250274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-250274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-250274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4",
	                "LowerDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-250274",
	                "Source": "/var/lib/docker/volumes/newest-cni-250274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-250274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-250274",
	                "name.minikube.sigs.k8s.io": "newest-cni-250274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d0e3f942ec6d0500df0555a39cafd7f6b651e02426eecaa379c65e463f60402",
	            "SandboxKey": "/var/run/docker/netns/2d0e3f942ec6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34911"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34912"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34915"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34913"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34914"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-250274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:fd:88:80:20:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "804e7137416d690774484eb7cc39c343cbbb64651a610611c9ac627077f5c75f",
	                    "EndpointID": "84a9e9e6e35125719106e5df451747e12db104567d42132baec6161ef45fe7c0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-250274",
	                        "3f010420231a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
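The inspect dump above is the post-mortem view of the kicbase node container: every exposed port (22, 2376, 5000, 8443, 32443) is published only on 127.0.0.1 with an ephemeral host port assigned at start. A minimal sketch for pulling just that mapping back out with the standard docker CLI (profile name taken from this run; the template assumes each port has at least one binding):

	# Print "container-port -> host-port" for each published port.
	docker container inspect newest-cni-250274 \
	  --format '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostPort}}{{println}}{{end}}'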
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250274 -n newest-cni-250274
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250274 -n newest-cni-250274: exit status 2 (338.263448ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
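Exit status 2 with a "Running" host is consistent with a paused profile: the container is up but the pause left the Kubernetes components stopped. A hedged sketch for reading the status in a script without aborting on the nonzero exit (the Host/Kubelet/APIServer fields match the default status output; the exact bit-encoding of the exit status is described in `minikube status --help` and may differ across versions):

	rc=0
	out/minikube-linux-arm64 status -p newest-cni-250274 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}' || rc=$?
	echo "status exit code: ${rc}"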
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250274 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-250274 logs -n 25: (1.06549568s)
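The harness collects only the last 25 lines per log source (`logs -n 25`). For manual triage a full dump is often more useful; assuming the `--file` flag behaves as in current minikube releases (writing the log bundle to a file instead of stdout), something like:

	out/minikube-linux-arm64 -p newest-cni-250274 logs --file=newest-cni-250274-postmortem.txt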
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p embed-certs-559379 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-559379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ image   │ no-preload-886951 image list --format=json                                                                                                                                                                                                    │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p disable-driver-mounts-877810                                                                                                                                                                                                               │ disable-driver-mounts-877810 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:35 UTC │
	│ image   │ embed-certs-559379 image list --format=json                                                                                                                                                                                                   │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ pause   │ -p embed-certs-559379 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-250274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ stop    │ -p newest-cni-250274 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-250274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-593480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ image   │ newest-cni-250274 image list --format=json                                                                                                                                                                                                    │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ stop    │ -p default-k8s-diff-port-593480 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ pause   │ -p newest-cni-250274 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
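	The Audit table above is rendered from minikube's persistent audit log. Assuming the default location under MINIKUBE_HOME (here /home/jenkins/minikube-integration/21767-1274243/.minikube/logs/audit.json) and the line-delimited rows with a .data payload that current minikube versions write - both are assumptions, adjust for your setup - the same history can be filtered per profile:

	# List start time, command, and args for one profile (field names assumed).
	jq -r 'select(.data.profile=="newest-cni-250274") | "\(.data.startTime) \(.data.command) \(.data.args)"' \
	  /home/jenkins/minikube-integration/21767-1274243/.minikube/logs/audit.json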
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:35:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:35:07.451459 1481740 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:35:07.451656 1481740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:07.451667 1481740 out.go:374] Setting ErrFile to fd 2...
	I1018 09:35:07.451672 1481740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:07.452023 1481740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:35:07.452457 1481740 out.go:368] Setting JSON to false
	I1018 09:35:07.453521 1481740 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40655,"bootTime":1760739453,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:35:07.453595 1481740 start.go:141] virtualization:  
	I1018 09:35:07.456720 1481740 out.go:179] * [newest-cni-250274] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:35:07.460746 1481740 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:35:07.460877 1481740 notify.go:220] Checking for updates...
	I1018 09:35:07.467201 1481740 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:35:07.470266 1481740 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:07.473319 1481740 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:35:07.476459 1481740 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:35:07.479356 1481740 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:35:07.482864 1481740 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:07.483467 1481740 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:35:07.517721 1481740 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:35:07.517837 1481740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:07.573757 1481740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:35:07.564612032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:07.573865 1481740 docker.go:318] overlay module found
	I1018 09:35:07.579110 1481740 out.go:179] * Using the docker driver based on existing profile
	I1018 09:35:07.581870 1481740 start.go:305] selected driver: docker
	I1018 09:35:07.581888 1481740 start.go:925] validating driver "docker" against &{Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:07.581991 1481740 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:35:07.582704 1481740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:07.639922 1481740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:35:07.63089103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:07.640270 1481740 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:35:07.640306 1481740 cni.go:84] Creating CNI manager for ""
	I1018 09:35:07.640366 1481740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:07.640410 1481740 start.go:349] cluster config:
	{Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:07.643508 1481740 out.go:179] * Starting "newest-cni-250274" primary control-plane node in "newest-cni-250274" cluster
	I1018 09:35:07.646302 1481740 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:35:07.649050 1481740 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:35:07.651891 1481740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:07.651979 1481740 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:35:07.651995 1481740 cache.go:58] Caching tarball of preloaded images
	I1018 09:35:07.652061 1481740 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:35:07.652302 1481740 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:35:07.652313 1481740 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:35:07.652431 1481740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/config.json ...
	I1018 09:35:07.672327 1481740 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:35:07.672353 1481740 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:35:07.672373 1481740 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:35:07.672397 1481740 start.go:360] acquireMachinesLock for newest-cni-250274: {Name:mk472d1fdef0a7773f022c5286349dcbff699ada Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:35:07.672472 1481740 start.go:364] duration metric: took 48.179µs to acquireMachinesLock for "newest-cni-250274"
	I1018 09:35:07.672495 1481740 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:35:07.672506 1481740 fix.go:54] fixHost starting: 
	I1018 09:35:07.672769 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:07.689184 1481740 fix.go:112] recreateIfNeeded on newest-cni-250274: state=Stopped err=<nil>
	W1018 09:35:07.689214 1481740 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:35:04.429700 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:35:06.430055 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:35:07.692361 1481740 out.go:252] * Restarting existing docker container for "newest-cni-250274" ...
	I1018 09:35:07.692442 1481740 cli_runner.go:164] Run: docker start newest-cni-250274
	I1018 09:35:07.935274 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:07.957316 1481740 kic.go:430] container "newest-cni-250274" state is running.
	I1018 09:35:07.957748 1481740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:35:07.979159 1481740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/config.json ...
	I1018 09:35:07.979391 1481740 machine.go:93] provisionDockerMachine start ...
	I1018 09:35:07.979451 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:08.003355 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:08.003689 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:08.003699 1481740 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:35:08.004657 1481740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 09:35:11.179820 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-250274
	
	I1018 09:35:11.179940 1481740 ubuntu.go:182] provisioning hostname "newest-cni-250274"
	I1018 09:35:11.180047 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:11.206494 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:11.206893 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:11.206920 1481740 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-250274 && echo "newest-cni-250274" | sudo tee /etc/hostname
	I1018 09:35:11.382677 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-250274
	
	I1018 09:35:11.382843 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:11.410095 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:11.410409 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:11.410427 1481740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250274/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:35:11.576823 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:35:11.576848 1481740 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:35:11.576870 1481740 ubuntu.go:190] setting up certificates
	I1018 09:35:11.576879 1481740 provision.go:84] configureAuth start
	I1018 09:35:11.576951 1481740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:35:11.595769 1481740 provision.go:143] copyHostCerts
	I1018 09:35:11.595828 1481740 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:35:11.596013 1481740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:35:11.596107 1481740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:35:11.596223 1481740 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:35:11.596229 1481740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:35:11.596257 1481740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:35:11.596318 1481740 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:35:11.596323 1481740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:35:11.596346 1481740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:35:11.596401 1481740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250274 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-250274]
	I1018 09:35:12.355708 1481740 provision.go:177] copyRemoteCerts
	I1018 09:35:12.355779 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:35:12.355831 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:12.375529 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	W1018 09:35:08.929925 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:35:10.929268 1474687 node_ready.go:49] node "default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:10.929306 1474687 node_ready.go:38] duration metric: took 41.002800702s for node "default-k8s-diff-port-593480" to be "Ready" ...
	I1018 09:35:10.929321 1474687 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:35:10.929387 1474687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:35:10.943201 1474687 api_server.go:72] duration metric: took 42.289449947s to wait for apiserver process to appear ...
	I1018 09:35:10.943224 1474687 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:35:10.943243 1474687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1018 09:35:10.963991 1474687 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1018 09:35:10.965001 1474687 api_server.go:141] control plane version: v1.34.1
	I1018 09:35:10.965026 1474687 api_server.go:131] duration metric: took 21.794732ms to wait for apiserver health ...
	I1018 09:35:10.965035 1474687 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:35:10.968142 1474687 system_pods.go:59] 8 kube-system pods found
	I1018 09:35:10.968179 1474687 system_pods.go:61] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:35:10.968187 1474687 system_pods.go:61] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:10.968193 1474687 system_pods.go:61] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:10.968198 1474687 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:10.968204 1474687 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:10.968210 1474687 system_pods.go:61] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:10.968221 1474687 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:10.968237 1474687 system_pods.go:61] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:35:10.968256 1474687 system_pods.go:74] duration metric: took 3.214188ms to wait for pod list to return data ...
	I1018 09:35:10.968265 1474687 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:35:10.970910 1474687 default_sa.go:45] found service account: "default"
	I1018 09:35:10.970940 1474687 default_sa.go:55] duration metric: took 2.66185ms for default service account to be created ...
	I1018 09:35:10.970949 1474687 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:35:10.973952 1474687 system_pods.go:86] 8 kube-system pods found
	I1018 09:35:10.973988 1474687 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:35:10.973995 1474687 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:10.974001 1474687 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:10.974006 1474687 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:10.974011 1474687 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:10.974015 1474687 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:10.974020 1474687 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:10.974032 1474687 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:35:10.974053 1474687 retry.go:31] will retry after 221.086539ms: missing components: kube-dns
	I1018 09:35:11.227378 1474687 system_pods.go:86] 8 kube-system pods found
	I1018 09:35:11.227412 1474687 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:35:11.227419 1474687 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:11.227426 1474687 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:11.227430 1474687 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:11.227434 1474687 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:11.227438 1474687 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:11.227445 1474687 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:11.227450 1474687 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:35:11.227465 1474687 retry.go:31] will retry after 359.059247ms: missing components: kube-dns
	I1018 09:35:11.591651 1474687 system_pods.go:86] 8 kube-system pods found
	I1018 09:35:11.591680 1474687 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Running
	I1018 09:35:11.591687 1474687 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:11.591692 1474687 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:11.591696 1474687 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:11.591700 1474687 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:11.591704 1474687 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:11.591708 1474687 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:11.591711 1474687 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Running
	I1018 09:35:11.591719 1474687 system_pods.go:126] duration metric: took 620.76266ms to wait for k8s-apps to be running ...
	I1018 09:35:11.591731 1474687 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:35:11.591788 1474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:35:11.607085 1474687 system_svc.go:56] duration metric: took 15.349406ms WaitForService to wait for kubelet
	I1018 09:35:11.607109 1474687 kubeadm.go:586] duration metric: took 42.953363535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:35:11.607128 1474687 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:35:11.610448 1474687 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:35:11.610475 1474687 node_conditions.go:123] node cpu capacity is 2
	I1018 09:35:11.610486 1474687 node_conditions.go:105] duration metric: took 3.353063ms to run NodePressure ...
	I1018 09:35:11.610498 1474687 start.go:241] waiting for startup goroutines ...
	I1018 09:35:11.610506 1474687 start.go:246] waiting for cluster config update ...
	I1018 09:35:11.610516 1474687 start.go:255] writing updated cluster config ...
	I1018 09:35:11.610802 1474687 ssh_runner.go:195] Run: rm -f paused
	I1018 09:35:11.619175 1474687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:35:11.623267 1474687 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.629186 1474687 pod_ready.go:94] pod "coredns-66bc5c9577-lxwgf" is "Ready"
	I1018 09:35:11.629210 1474687 pod_ready.go:86] duration metric: took 5.918899ms for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.632132 1474687 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.637675 1474687 pod_ready.go:94] pod "etcd-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:11.637748 1474687 pod_ready.go:86] duration metric: took 5.592771ms for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.646445 1474687 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.652737 1474687 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:11.652766 1474687 pod_ready.go:86] duration metric: took 6.294159ms for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.657197 1474687 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.023696 1474687 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:12.023723 1474687 pod_ready.go:86] duration metric: took 366.501267ms for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.225055 1474687 pod_ready.go:83] waiting for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.623349 1474687 pod_ready.go:94] pod "kube-proxy-lz9p5" is "Ready"
	I1018 09:35:12.623374 1474687 pod_ready.go:86] duration metric: took 398.289755ms for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.823706 1474687 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:13.223594 1474687 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:13.223626 1474687 pod_ready.go:86] duration metric: took 399.888669ms for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:13.223639 1474687 pod_ready.go:40] duration metric: took 1.604415912s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:35:13.301877 1474687 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:35:13.305290 1474687 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-593480" cluster and "default" namespace by default
	I1018 09:35:12.481985 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:35:12.500215 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:35:12.518262 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:35:12.535625 1481740 provision.go:87] duration metric: took 958.724947ms to configureAuth
	I1018 09:35:12.535656 1481740 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:35:12.535878 1481740 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:12.535994 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:12.554366 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:12.554803 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:12.554821 1481740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:35:12.843291 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:35:12.843313 1481740 machine.go:96] duration metric: took 4.863913345s to provisionDockerMachine
	I1018 09:35:12.843324 1481740 start.go:293] postStartSetup for "newest-cni-250274" (driver="docker")
	I1018 09:35:12.843334 1481740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:35:12.843391 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:35:12.843449 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:12.861749 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:12.964910 1481740 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:35:12.969111 1481740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:35:12.969140 1481740 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:35:12.969151 1481740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:35:12.969229 1481740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:35:12.969334 1481740 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:35:12.969489 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:35:12.977232 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:12.995598 1481740 start.go:296] duration metric: took 152.258132ms for postStartSetup
	I1018 09:35:12.995699 1481740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:35:12.995753 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:13.015253 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:13.116800 1481740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:35:13.121486 1481740 fix.go:56] duration metric: took 5.448972155s for fixHost
	I1018 09:35:13.121512 1481740 start.go:83] releasing machines lock for "newest-cni-250274", held for 5.449028423s
	I1018 09:35:13.121591 1481740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:35:13.138663 1481740 ssh_runner.go:195] Run: cat /version.json
	I1018 09:35:13.138745 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:13.139088 1481740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:35:13.139159 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:13.157849 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:13.158304 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:13.380597 1481740 ssh_runner.go:195] Run: systemctl --version
	I1018 09:35:13.387893 1481740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:35:13.474779 1481740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:35:13.480245 1481740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:35:13.480317 1481740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:35:13.489559 1481740 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
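The find line above is logged with its shell quoting stripped; spelled out, it renames any stock bridge/podman CNI configs to *.mk_disabled (reversible, unlike deletion) so only the CNI minikube installs stays active. A quoted sketch, using the portable positional form of -exec sh -c:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;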
	I1018 09:35:13.489580 1481740 start.go:495] detecting cgroup driver to use...
	I1018 09:35:13.489611 1481740 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:35:13.489658 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:35:13.506229 1481740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:35:13.530174 1481740 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:35:13.530234 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:35:13.549911 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:35:13.566716 1481740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:35:13.759046 1481740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:35:13.888862 1481740 docker.go:234] disabling docker service ...
	I1018 09:35:13.888950 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:35:13.905196 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:35:13.920613 1481740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:35:14.084167 1481740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:35:14.224030 1481740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:35:14.237413 1481740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:35:14.250763 1481740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:35:14.250832 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.259541 1481740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:35:14.259610 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.275347 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.284139 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.293584 1481740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:35:14.301331 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.316397 1481740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.324990 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
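Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, the cgroup manager, conmon's cgroup, and the unprivileged-port sysctl. A reconstruction of the resulting fragment, derived from the sed expressions (the TOML section placement is an assumption; CRI-O conventionally keeps cgroup_manager under [crio.runtime] and pause_image under [crio.image]):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"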
	I1018 09:35:14.334447 1481740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:35:14.343757 1481740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:35:14.352870 1481740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:14.488953 1481740 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:35:14.626670 1481740 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:35:14.626738 1481740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:35:14.631882 1481740 start.go:563] Will wait 60s for crictl version
	I1018 09:35:14.631943 1481740 ssh_runner.go:195] Run: which crictl
	I1018 09:35:14.635554 1481740 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:35:14.660118 1481740 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
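crictl locates the runtime through /etc/crictl.yaml, which the earlier printf step wrote as a single line pointing at the CRI-O socket; the version probe above confirms the wiring. A quick check by hand (sketch):

    # /etc/crictl.yaml, as written earlier in this run:
    #   runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl version   # RuntimeName/RuntimeVersion, as in the log
    sudo crictl info      # fuller runtime status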
	I1018 09:35:14.660278 1481740 ssh_runner.go:195] Run: crio --version
	I1018 09:35:14.692419 1481740 ssh_runner.go:195] Run: crio --version
	I1018 09:35:14.724831 1481740 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:35:14.727979 1481740 cli_runner.go:164] Run: docker network inspect newest-cni-250274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:35:14.745664 1481740 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:35:14.749471 1481740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
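The /etc/hosts update above uses a filter-append-copy pattern: drop any stale host.minikube.internal entry, append the fresh one, and cp (not mv) the temp file back, which keeps working even when /etc/hosts is a bind mount, as it is inside a Docker container. Spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$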
	I1018 09:35:14.764773 1481740 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:35:14.768286 1481740 kubeadm.go:883] updating cluster {Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:35:14.768419 1481740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:14.768503 1481740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:14.801828 1481740 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:14.801854 1481740 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:35:14.801911 1481740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:14.826228 1481740 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:14.826251 1481740 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:35:14.826259 1481740 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:35:14.826360 1481740 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-250274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:35:14.826446 1481740 ssh_runner.go:195] Run: crio config
	I1018 09:35:14.905972 1481740 cni.go:84] Creating CNI manager for ""
	I1018 09:35:14.905993 1481740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:14.906020 1481740 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:35:14.906044 1481740 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250274 NodeName:newest-cni-250274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:35:14.906187 1481740 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-250274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:35:14.906344 1481740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:35:14.914745 1481740 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:35:14.914860 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:35:14.922783 1481740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:35:14.936583 1481740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:35:14.950976 1481740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
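The staged kubeadm.yaml.new holds the four documents shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is later diffed against the active copy to decide whether the control plane needs reconfiguring. For a hand check of such a file (sketch; `kubeadm config validate` is available in recent kubeadm releases, and minikube stages kubeadm alongside kubelet/kubectl in the binaries directory):

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new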
	I1018 09:35:14.964788 1481740 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:35:14.968749 1481740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:35:14.978644 1481740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:15.109859 1481740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:15.132454 1481740 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274 for IP: 192.168.76.2
	I1018 09:35:15.132477 1481740 certs.go:195] generating shared ca certs ...
	I1018 09:35:15.132494 1481740 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:15.132690 1481740 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:35:15.132760 1481740 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:35:15.132775 1481740 certs.go:257] generating profile certs ...
	I1018 09:35:15.132897 1481740 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/client.key
	I1018 09:35:15.132989 1481740 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key.08fa8726
	I1018 09:35:15.133059 1481740 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key
	I1018 09:35:15.133219 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:35:15.133276 1481740 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:35:15.133293 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:35:15.133334 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:35:15.133379 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:35:15.133413 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:35:15.133491 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:15.134198 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:35:15.158815 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:35:15.184882 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:35:15.209112 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:35:15.230570 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:35:15.255541 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:35:15.303005 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:35:15.334594 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:35:15.354973 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:35:15.378445 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:35:15.400905 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:35:15.424111 1481740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:35:15.439388 1481740 ssh_runner.go:195] Run: openssl version
	I1018 09:35:15.446286 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:35:15.455046 1481740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:15.458845 1481740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:15.458926 1481740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:15.504387 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:35:15.512106 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:35:15.520175 1481740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:35:15.523930 1481740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:35:15.524021 1481740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:35:15.565230 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:35:15.573270 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:35:15.581447 1481740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:35:15.585095 1481740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:35:15.585159 1481740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:35:15.627708 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
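The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is looked up through a link named <subject-hash>.0, and the hash is exactly what `openssl x509 -hash` prints. For example:

    # subject hash; b5213941 for this CA, matching the symlink created above
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"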
	I1018 09:35:15.635390 1481740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:35:15.639295 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:35:15.691161 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:35:15.743878 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:35:15.798855 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:35:15.901722 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:35:15.990208 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
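Each surviving control-plane certificate is probed with -checkend 86400, which makes openssl exit non-zero if the certificate expires within 86400 seconds (24 h); a failing probe is what would force regeneration on this restart path. Standalone form:

    # exit 0: valid for at least another day; exit 1: due for regeneration
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/front-proxy-client.crt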
	I1018 09:35:16.123903 1481740 kubeadm.go:400] StartCluster: {Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:16.124038 1481740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:35:16.124128 1481740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:35:16.203415 1481740 cri.go:89] found id: "838ef5430e58bb4a609136dfa74910535190f395496c2bd21432db44c19aaff4"
	I1018 09:35:16.203482 1481740 cri.go:89] found id: "52152d05aeb48008c167a0cc9d9f80e34c5ab6124747ccfbbf79ba25a61db69f"
	I1018 09:35:16.203501 1481740 cri.go:89] found id: "66052a766abf5dba4b7c9118f1e1e91be861206c216d0a3766c7fcebd6504824"
	I1018 09:35:16.203518 1481740 cri.go:89] found id: "89f5e6f41611e1935f1802e4ae146f223304dda14ce071d5b606ea7ceb35d965"
	I1018 09:35:16.203536 1481740 cri.go:89] found id: ""
	I1018 09:35:16.203612 1481740 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:35:16.224327 1481740 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:16Z" level=error msg="open /run/runc: no such file or directory"
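This failure is benign: the unpause probe asks runc for its container list, but after the restart runc has no state directory yet, so the command exits 1 with the ENOENT above and minikube concludes there is nothing to unpause (that /run/runc is runc's state root here is inferred from the error message, not stated elsewhere in the log):

    sudo runc list -f json
    # => "open /run/runc: no such file or directory", exit status 1,
    #    when no runc-managed containers have registered state yet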
	I1018 09:35:16.224461 1481740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:35:16.238114 1481740 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:35:16.238179 1481740 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:35:16.238261 1481740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:35:16.248851 1481740 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:35:16.249531 1481740 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-250274" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:16.249863 1481740 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-1274243/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-250274" cluster setting kubeconfig missing "newest-cni-250274" context setting]
	I1018 09:35:16.250496 1481740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:16.252747 1481740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:35:16.278778 1481740 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:35:16.278819 1481740 kubeadm.go:601] duration metric: took 40.619299ms to restartPrimaryControlPlane
	I1018 09:35:16.278829 1481740 kubeadm.go:402] duration metric: took 154.935603ms to StartCluster
	I1018 09:35:16.278848 1481740 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:16.278924 1481740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:16.279901 1481740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:16.280119 1481740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:35:16.280488 1481740 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:16.280563 1481740 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:35:16.280675 1481740 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-250274"
	I1018 09:35:16.280695 1481740 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-250274"
	W1018 09:35:16.280706 1481740 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:35:16.280723 1481740 addons.go:69] Setting dashboard=true in profile "newest-cni-250274"
	I1018 09:35:16.280768 1481740 addons.go:238] Setting addon dashboard=true in "newest-cni-250274"
	W1018 09:35:16.280798 1481740 addons.go:247] addon dashboard should already be in state true
	I1018 09:35:16.280727 1481740 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:16.280863 1481740 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:16.281319 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.281589 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.280734 1481740 addons.go:69] Setting default-storageclass=true in profile "newest-cni-250274"
	I1018 09:35:16.281807 1481740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-250274"
	I1018 09:35:16.282555 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.284931 1481740 out.go:179] * Verifying Kubernetes components...
	I1018 09:35:16.295995 1481740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:16.348048 1481740 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:35:16.351007 1481740 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:35:16.351060 1481740 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:35:16.352470 1481740 addons.go:238] Setting addon default-storageclass=true in "newest-cni-250274"
	W1018 09:35:16.352487 1481740 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:35:16.352512 1481740 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:16.352956 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.355937 1481740 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:16.355961 1481740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:35:16.356022 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:16.356163 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:35:16.356177 1481740 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:35:16.356219 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:16.401098 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:16.406112 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:16.417491 1481740 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:16.417513 1481740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:35:16.417578 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:16.451582 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:16.682103 1481740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:16.699283 1481740 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:35:16.699381 1481740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:35:16.725360 1481740 api_server.go:72] duration metric: took 445.209726ms to wait for apiserver process to appear ...
	I1018 09:35:16.725426 1481740 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:35:16.725473 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:16.739521 1481740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:16.744868 1481740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:16.751681 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:35:16.751742 1481740 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:35:16.791579 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:35:16.791645 1481740 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:35:16.820337 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:35:16.820401 1481740 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:35:16.847089 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:35:16.847152 1481740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:35:16.910886 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:35:16.910961 1481740 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:35:16.991044 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:35:16.991119 1481740 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:35:17.042713 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:35:17.042798 1481740 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:35:17.058686 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:35:17.058760 1481740 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:35:17.077136 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:35:17.077208 1481740 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:35:17.095024 1481740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
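The dashboard addon is applied as ten manifests in a single kubectl invocation using the in-VM kubeconfig and binaries. Once it returns, the rollout can be inspected the same way (sketch; kubernetes-dashboard is the namespace the ns manifest conventionally creates):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl \
      -n kubernetes-dashboard get deployments,pods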
	I1018 09:35:21.727935 1481740 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:35:21.728037 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:22.260351 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:35:22.260421 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:35:22.260452 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:22.305300 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:35:22.305371 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
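These 403s are expected this early in the boot: /healthz is queried anonymously, and anonymous access is only granted once the apiserver's RBAC bootstrap lands (by default via the system:public-info-viewer binding; note the [-]poststarthook/rbac/bootstrap-roles entries still failing in the 500 responses below). The same probe by hand:

    curl -sk https://192.168.76.2:8443/healthz
    # 403 while RBAC bootstrap is pending, 500 while post-start hooks
    # finish, then a plain "ok" once the control plane is healthy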
	I1018 09:35:22.725851 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:22.787455 1481740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.047858986s)
	I1018 09:35:22.835923 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:35:22.835959 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:35:23.225579 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:23.253697 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:35:23.253721 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:35:23.725915 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:23.741448 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:35:23.741477 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:35:24.226193 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:24.244770 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:35:24.244794 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:35:24.725829 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:24.734409 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:35:24.735613 1481740 api_server.go:141] control plane version: v1.34.1
	I1018 09:35:24.735645 1481740 api_server.go:131] duration metric: took 8.010197923s to wait for apiserver health ...
	I1018 09:35:24.735655 1481740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:35:24.784777 1481740 system_pods.go:59] 8 kube-system pods found
	I1018 09:35:24.784813 1481740 system_pods.go:61] "coredns-66bc5c9577-g7kfg" [38b7f130-b2b9-48a2-93bd-ad4c13e911cb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:35:24.784824 1481740 system_pods.go:61] "etcd-newest-cni-250274" [b856dfe7-8c88-4774-9e86-2b971cf7e5f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:35:24.784831 1481740 system_pods.go:61] "kindnet-p4pv8" [7a400bc4-76f3-4503-b82a-52b0cabbb2a3] Running
	I1018 09:35:24.784839 1481740 system_pods.go:61] "kube-apiserver-newest-cni-250274" [2b020b61-a478-4fd1-9bd8-ae42ae1ab60e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:35:24.784845 1481740 system_pods.go:61] "kube-controller-manager-newest-cni-250274" [54fb4f01-f3c6-4b86-a2e4-48e6656c751e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:35:24.784851 1481740 system_pods.go:61] "kube-proxy-w56ln" [84d08ca5-9902-4380-bd4e-2aac486b22e6] Running
	I1018 09:35:24.784860 1481740 system_pods.go:61] "kube-scheduler-newest-cni-250274" [51b1c6fd-638b-47fa-9f59-e24e2ec914f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:35:24.784866 1481740 system_pods.go:61] "storage-provisioner" [8a360733-56ab-4bc7-ae00-5f7b4d528d8d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:35:24.784872 1481740 system_pods.go:74] duration metric: took 49.21114ms to wait for pod list to return data ...
	I1018 09:35:24.784881 1481740 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:35:24.804651 1481740 default_sa.go:45] found service account: "default"
	I1018 09:35:24.804674 1481740 default_sa.go:55] duration metric: took 19.787559ms for default service account to be created ...
	I1018 09:35:24.804686 1481740 kubeadm.go:586] duration metric: took 8.524542239s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:35:24.804702 1481740 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:35:24.850210 1481740 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:35:24.850253 1481740 node_conditions.go:123] node cpu capacity is 2
	I1018 09:35:24.850266 1481740 node_conditions.go:105] duration metric: took 45.558427ms to run NodePressure ...
	I1018 09:35:24.850344 1481740 start.go:241] waiting for startup goroutines ...
	I1018 09:35:24.931506 1481740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.186557626s)
	I1018 09:35:25.059247 1481740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.964080513s)
	I1018 09:35:25.064223 1481740 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-250274 addons enable metrics-server
	
	I1018 09:35:25.067279 1481740 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1018 09:35:25.070270 1481740 addons.go:514] duration metric: took 8.789690746s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1018 09:35:25.070324 1481740 start.go:246] waiting for cluster config update ...
	I1018 09:35:25.070338 1481740 start.go:255] writing updated cluster config ...
	I1018 09:35:25.070623 1481740 ssh_runner.go:195] Run: rm -f paused
	I1018 09:35:25.195590 1481740 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:35:25.199144 1481740 out.go:179] * Done! kubectl is now configured to use "newest-cni-250274" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.563654047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.578517386Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4bb1dccb-6ca8-43b9-aea6-64ea91a16092 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.579629732Z" level=info msg="Running pod sandbox: kube-system/kindnet-p4pv8/POD" id=317bcda0-e932-4395-ba8f-dd9649935c89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.579814425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.58906412Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=317bcda0-e932-4395-ba8f-dd9649935c89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.607176394Z" level=info msg="Ran pod sandbox 48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191 with infra container: kube-system/kube-proxy-w56ln/POD" id=4bb1dccb-6ca8-43b9-aea6-64ea91a16092 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.620690939Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=87d28f56-4fff-4bfb-b718-1e55731e6344 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.646350835Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0fc1cc3f-a192-4e1a-a877-4b61a7b67ef0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.649909078Z" level=info msg="Creating container: kube-system/kube-proxy-w56ln/kube-proxy" id=8d45a1da-7754-4a5f-9ff4-5cc95dbe9eff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.695759041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.709994123Z" level=info msg="Ran pod sandbox 0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d with infra container: kube-system/kindnet-p4pv8/POD" id=317bcda0-e932-4395-ba8f-dd9649935c89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.738715546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.739217251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.744193079Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=10a01f6c-a86e-4f13-9ae9-f66644c38d7b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.753280572Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=14fd6686-278b-490e-897a-ec17058b9aca name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.764554612Z" level=info msg="Creating container: kube-system/kindnet-p4pv8/kindnet-cni" id=fda8aa63-5899-4b29-859f-bcfe2d4adfa9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.764888166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.790760044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.791244305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.887491509Z" level=info msg="Created container 3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e: kube-system/kindnet-p4pv8/kindnet-cni" id=fda8aa63-5899-4b29-859f-bcfe2d4adfa9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.887882268Z" level=info msg="Created container 21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482: kube-system/kube-proxy-w56ln/kube-proxy" id=8d45a1da-7754-4a5f-9ff4-5cc95dbe9eff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.903980592Z" level=info msg="Starting container: 3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e" id=4b7f5136-4b7f-4d70-8b96-a5ad55490289 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.904011369Z" level=info msg="Starting container: 21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482" id=8e5aa497-b325-459e-ab03-345f3cb713f4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.911241592Z" level=info msg="Started container" PID=1061 containerID=3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e description=kube-system/kindnet-p4pv8/kindnet-cni id=4b7f5136-4b7f-4d70-8b96-a5ad55490289 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.920542863Z" level=info msg="Started container" PID=1058 containerID=21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482 description=kube-system/kube-proxy-w56ln/kube-proxy id=8e5aa497-b325-459e-ab03-345f3cb713f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3439d88200c25       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   0899e91e5c34e       kindnet-p4pv8                               kube-system
	21ff0b5d40f5d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   48e702a5fca0a       kube-proxy-w56ln                            kube-system
	838ef5430e58b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   ec768dc9719e2       kube-scheduler-newest-cni-250274            kube-system
	52152d05aeb48       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   7d4aa5582ddf1       kube-controller-manager-newest-cni-250274   kube-system
	66052a766abf5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   11ee665c5977d       kube-apiserver-newest-cni-250274            kube-system
	89f5e6f41611e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   12742abdaf74f       etcd-newest-cni-250274                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-250274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-250274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=newest-cni-250274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_34_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:34:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-250274
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:35:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:35:22 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:35:22 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:35:22 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 09:35:22 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-250274
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c687e818-f7ce-4926-9d94-118c26727656
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-250274                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-p4pv8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-250274             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-250274    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-w56ln                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-250274             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-250274 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-250274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-250274 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-250274 event: Registered Node newest-cni-250274 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-250274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-250274 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-250274 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-250274 event: Registered Node newest-cni-250274 in Controller
	
	
	==> dmesg <==
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	[Oct18 09:34] overlayfs: idmapped layers are currently not supported
	[ +34.458375] overlayfs: idmapped layers are currently not supported
	[Oct18 09:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [89f5e6f41611e1935f1802e4ae146f223304dda14ce071d5b606ea7ceb35d965] <==
	{"level":"warn","ts":"2025-10-18T09:35:20.978305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.001411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.015306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.031460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.048633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.069811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.097682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.121474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.134273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.152295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.180102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.198525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.217559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.231203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.249987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.274178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.293758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.320091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.336848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.365293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.386358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.414352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.441615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.456639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.561696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49012","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:29 up 11:17,  0 user,  load average: 3.17, 3.18, 2.72
	Linux newest-cni-250274 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e] <==
	I1018 09:35:24.028968       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:35:24.029379       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:35:24.029539       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:35:24.029556       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:35:24.029614       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:35:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:35:24.309660       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:35:24.316104       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:35:24.316201       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:35:24.316363       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [66052a766abf5dba4b7c9118f1e1e91be861206c216d0a3766c7fcebd6504824] <==
	I1018 09:35:22.656307       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:35:22.656407       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:35:22.656450       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:35:22.657528       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:35:22.658157       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:35:22.658170       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:35:22.658176       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:35:22.658182       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:35:22.659289       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:35:22.659305       1 policy_source.go:240] refreshing policies
	I1018 09:35:22.684474       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:35:22.701300       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:35:22.796039       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:35:23.149185       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:35:23.375151       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:35:24.424543       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:35:24.637905       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:35:24.703691       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:35:24.783099       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:35:25.012953       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.127.31"}
	I1018 09:35:25.050787       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.42.111"}
	I1018 09:35:25.958153       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:35:26.281684       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:35:26.390711       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:35:26.429187       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [52152d05aeb48008c167a0cc9d9f80e34c5ab6124747ccfbbf79ba25a61db69f] <==
	I1018 09:35:25.910899       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:35:25.917627       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:35:25.920409       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:35:25.920477       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:35:25.920530       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:35:25.936195       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:35:25.941071       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:35:25.941154       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:35:25.941191       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:35:25.941218       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:35:25.946444       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:35:25.947099       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:35:25.947374       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:35:25.947475       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:35:25.947667       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:35:25.948938       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-250274"
	I1018 09:35:25.950580       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:35:25.950848       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:35:25.985969       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:35:25.986037       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:35:25.986100       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:35:25.986795       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:35:25.987028       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:35:25.987043       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:35:25.987056       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482] <==
	I1018 09:35:24.438302       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:35:24.713697       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:35:24.914579       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:35:24.914624       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:35:24.914720       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:35:25.064828       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:35:25.065016       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:35:25.085108       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:35:25.085726       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:35:25.085789       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:35:25.089272       1 config.go:200] "Starting service config controller"
	I1018 09:35:25.089594       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:35:25.089670       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:35:25.089701       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:35:25.089753       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:35:25.089780       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:35:25.090890       1 config.go:309] "Starting node config controller"
	I1018 09:35:25.090966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:35:25.090997       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:35:25.196450       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:35:25.197616       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:35:25.197645       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [838ef5430e58bb4a609136dfa74910535190f395496c2bd21432db44c19aaff4] <==
	I1018 09:35:22.452689       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:35:22.460582       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:35:22.460695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:35:22.460713       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:35:22.460729       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 09:35:22.476430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 09:35:22.488421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:35:22.488536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:35:22.488613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:35:22.488696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:35:22.488770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:35:22.488851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:35:22.488932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:35:22.489013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:35:22.489098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:35:22.489166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:35:22.489251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:35:22.489324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:35:22.489712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:35:22.489788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:35:22.489811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:35:22.489823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:35:22.489902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:35:22.490205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1018 09:35:23.565473       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.458543     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: E1018 09:35:22.800116     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-250274\" already exists" pod="kube-system/kube-controller-manager-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.800165     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: E1018 09:35:22.849761     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-250274\" already exists" pod="kube-system/kube-scheduler-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.849901     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.868913     724 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.869031     724 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.869070     724 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.872635     724 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: E1018 09:35:22.893558     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-250274\" already exists" pod="kube-system/etcd-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.900160     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: E1018 09:35:22.948960     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-250274\" already exists" pod="kube-system/kube-apiserver-newest-cni-250274"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.236384     724 apiserver.go:52] "Watching apiserver"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.281596     724 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.283403     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-cni-cfg\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.321267     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-xtables-lock\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.321323     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84d08ca5-9902-4380-bd4e-2aac486b22e6-xtables-lock\") pod \"kube-proxy-w56ln\" (UID: \"84d08ca5-9902-4380-bd4e-2aac486b22e6\") " pod="kube-system/kube-proxy-w56ln"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.321346     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-lib-modules\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.321365     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84d08ca5-9902-4380-bd4e-2aac486b22e6-lib-modules\") pod \"kube-proxy-w56ln\" (UID: \"84d08ca5-9902-4380-bd4e-2aac486b22e6\") " pod="kube-system/kube-proxy-w56ln"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.480827     724 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: W1018 09:35:23.598747     724 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/crio-48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191 WatchSource:0}: Error finding container 48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191: Status 404 returned error can't find the container with id 48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: W1018 09:35:23.702021     724 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/crio-0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d WatchSource:0}: Error finding container 0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d: Status 404 returned error can't find the container with id 0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d
	Oct 18 09:35:26 newest-cni-250274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:35:27 newest-cni-250274 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:35:27 newest-cni-250274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
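
The repeated 500s near the top of this log can be probed by hand against the same endpoint the test polls; a minimal sketch, assuming the default minikube certificate layout under ~/.minikube (the actual minikube home on the CI host differs):

	curl --cacert ~/.minikube/ca.crt \
	  --cert ~/.minikube/profiles/newest-cni-250274/client.crt \
	  --key ~/.minikube/profiles/newest-cni-250274/client.key \
	  "https://192.168.76.2:8443/healthz?verbose"

With ?verbose the apiserver returns the same [+]/[-] check list captured above; as the log shows, the transient [-]poststarthook/rbac/bootstrap-roles failure cleared and the next poll returned 200.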
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250274 -n newest-cni-250274
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250274 -n newest-cni-250274: exit status 2 (382.790075ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
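
The {{.APIServer}} template above reports only a single field; when triaging a pause failure like this one, the same binary can emit every component state at once. A sketch, assuming the same profile:

	out/minikube-linux-arm64 status -p newest-cni-250274 --output=json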
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-250274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-g7kfg storage-provisioner dashboard-metrics-scraper-6ffb444bf9-khvmj kubernetes-dashboard-855c9754f9-wwppf
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-250274 describe pod coredns-66bc5c9577-g7kfg storage-provisioner dashboard-metrics-scraper-6ffb444bf9-khvmj kubernetes-dashboard-855c9754f9-wwppf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-250274 describe pod coredns-66bc5c9577-g7kfg storage-provisioner dashboard-metrics-scraper-6ffb444bf9-khvmj kubernetes-dashboard-855c9754f9-wwppf: exit status 1 (91.205441ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-g7kfg" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-khvmj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wwppf" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-250274 describe pod coredns-66bc5c9577-g7kfg storage-provisioner dashboard-metrics-scraper-6ffb444bf9-khvmj kubernetes-dashboard-855c9754f9-wwppf: exit status 1
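
The NotFound errors above most likely stem from a namespace mismatch: the describe command was issued without -n, so kubectl searched "default", while the pods found seconds earlier by the -A listing live in kube-system and kubernetes-dashboard. A namespaced re-check, assuming the same context, would be:

	kubectl --context newest-cni-250274 -n kube-system describe pod coredns-66bc5c9577-g7kfg storage-provisioner
	kubectl --context newest-cni-250274 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-khvmj kubernetes-dashboard-855c9754f9-wwppf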
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-250274
helpers_test.go:243: (dbg) docker inspect newest-cni-250274:

-- stdout --
	[
	    {
	        "Id": "3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4",
	        "Created": "2025-10-18T09:34:27.497200504Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1481864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:35:07.723471637Z",
	            "FinishedAt": "2025-10-18T09:35:06.806103641Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/hosts",
	        "LogPath": "/var/lib/docker/containers/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4-json.log",
	        "Name": "/newest-cni-250274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-250274:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-250274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4",
	                "LowerDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2fe5ff73b8ff26832abe48ec6c179e857bbcb47b7c7fd14f5b5c2bf3f5188935/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-250274",
	                "Source": "/var/lib/docker/volumes/newest-cni-250274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-250274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-250274",
	                "name.minikube.sigs.k8s.io": "newest-cni-250274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d0e3f942ec6d0500df0555a39cafd7f6b651e02426eecaa379c65e463f60402",
	            "SandboxKey": "/var/run/docker/netns/2d0e3f942ec6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34911"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34912"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34915"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34913"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34914"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-250274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:fd:88:80:20:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "804e7137416d690774484eb7cc39c343cbbb64651a610611c9ac627077f5c75f",
	                    "EndpointID": "84a9e9e6e35125719106e5df451747e12db104567d42132baec6161ef45fe7c0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-250274",
	                        "3f010420231a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
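For reference, individual fields of the inspect dump above can be pulled out with Go templates instead of scanning the full JSON; a minimal sketch (the port template is the same one the harness runs later in this log):

	# container state, pause flag and restart count
	docker container inspect newest-cni-250274 --format '{{.State.Status}} {{.State.Paused}} {{.RestartCount}}'

	# host port mapped to the container's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-250274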
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250274 -n newest-cni-250274
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250274 -n newest-cni-250274: exit status 2 (333.645074ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
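The `--format={{.Host}}` query above prints only the host field, which is why "Running" and a non-zero exit can coexist: minikube encodes each component's state into the exit status bits, so status 2 appears to flag a cluster component as not running even though the host container is up (hence "may be ok" for a paused cluster). A hedged sketch of widening the same query, assuming the usual status template fields:

	# per-component view of the same status query
	out/minikube-linux-arm64 status -p newest-cni-250274 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}/{{.Kubeconfig}}'

	# or structured output for tooling
	out/minikube-linux-arm64 status -p newest-cni-250274 --output json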
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250274 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-250274 logs -n 25: (1.063113895s)
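`logs -n 25` caps each log source at its last 25 lines; if the post-mortem below cuts too aggressively, the same command can be rerun by hand with a larger window or written to a file (a sketch; flags as in `minikube logs --help`):

	out/minikube-linux-arm64 -p newest-cni-250274 logs -n 100 --file=./newest-cni-250274.log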
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:32 UTC │
	│ start   │ -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-559379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │                     │
	│ stop    │ -p embed-certs-559379 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:32 UTC │ 18 Oct 25 09:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-559379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ image   │ no-preload-886951 image list --format=json                                                                                                                                                                                                    │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p disable-driver-mounts-877810                                                                                                                                                                                                               │ disable-driver-mounts-877810 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:35 UTC │
	│ image   │ embed-certs-559379 image list --format=json                                                                                                                                                                                                   │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ pause   │ -p embed-certs-559379 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-250274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ stop    │ -p newest-cni-250274 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-250274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-593480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ image   │ newest-cni-250274 image list --format=json                                                                                                                                                                                                    │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ stop    │ -p default-k8s-diff-port-593480 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ pause   │ -p newest-cni-250274 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:35:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:35:07.451459 1481740 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:35:07.451656 1481740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:07.451667 1481740 out.go:374] Setting ErrFile to fd 2...
	I1018 09:35:07.451672 1481740 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:07.452023 1481740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:35:07.452457 1481740 out.go:368] Setting JSON to false
	I1018 09:35:07.453521 1481740 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40655,"bootTime":1760739453,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:35:07.453595 1481740 start.go:141] virtualization:  
	I1018 09:35:07.456720 1481740 out.go:179] * [newest-cni-250274] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:35:07.460746 1481740 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:35:07.460877 1481740 notify.go:220] Checking for updates...
	I1018 09:35:07.467201 1481740 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:35:07.470266 1481740 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:07.473319 1481740 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:35:07.476459 1481740 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:35:07.479356 1481740 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:35:07.482864 1481740 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:07.483467 1481740 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:35:07.517721 1481740 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:35:07.517837 1481740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:07.573757 1481740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:35:07.564612032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:07.573865 1481740 docker.go:318] overlay module found
	I1018 09:35:07.579110 1481740 out.go:179] * Using the docker driver based on existing profile
	I1018 09:35:07.581870 1481740 start.go:305] selected driver: docker
	I1018 09:35:07.581888 1481740 start.go:925] validating driver "docker" against &{Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:07.581991 1481740 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:35:07.582704 1481740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:07.639922 1481740 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:35:07.63089103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:07.640270 1481740 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:35:07.640306 1481740 cni.go:84] Creating CNI manager for ""
	I1018 09:35:07.640366 1481740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:07.640410 1481740 start.go:349] cluster config:
	{Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:07.643508 1481740 out.go:179] * Starting "newest-cni-250274" primary control-plane node in "newest-cni-250274" cluster
	I1018 09:35:07.646302 1481740 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:35:07.649050 1481740 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:35:07.651891 1481740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:07.651979 1481740 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:35:07.651995 1481740 cache.go:58] Caching tarball of preloaded images
	I1018 09:35:07.652061 1481740 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:35:07.652302 1481740 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:35:07.652313 1481740 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:35:07.652431 1481740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/config.json ...
	I1018 09:35:07.672327 1481740 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:35:07.672353 1481740 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:35:07.672373 1481740 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:35:07.672397 1481740 start.go:360] acquireMachinesLock for newest-cni-250274: {Name:mk472d1fdef0a7773f022c5286349dcbff699ada Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:35:07.672472 1481740 start.go:364] duration metric: took 48.179µs to acquireMachinesLock for "newest-cni-250274"
	I1018 09:35:07.672495 1481740 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:35:07.672506 1481740 fix.go:54] fixHost starting: 
	I1018 09:35:07.672769 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:07.689184 1481740 fix.go:112] recreateIfNeeded on newest-cni-250274: state=Stopped err=<nil>
	W1018 09:35:07.689214 1481740 fix.go:138] unexpected machine state, will restart: <nil>
	W1018 09:35:04.429700 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	W1018 09:35:06.430055 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:35:07.692361 1481740 out.go:252] * Restarting existing docker container for "newest-cni-250274" ...
	I1018 09:35:07.692442 1481740 cli_runner.go:164] Run: docker start newest-cni-250274
	I1018 09:35:07.935274 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:07.957316 1481740 kic.go:430] container "newest-cni-250274" state is running.
	I1018 09:35:07.957748 1481740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:35:07.979159 1481740 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/config.json ...
	I1018 09:35:07.979391 1481740 machine.go:93] provisionDockerMachine start ...
	I1018 09:35:07.979451 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:08.003355 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:08.003689 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:08.003699 1481740 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:35:08.004657 1481740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1018 09:35:11.179820 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-250274
	
	I1018 09:35:11.179940 1481740 ubuntu.go:182] provisioning hostname "newest-cni-250274"
	I1018 09:35:11.180047 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:11.206494 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:11.206893 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:11.206920 1481740 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-250274 && echo "newest-cni-250274" | sudo tee /etc/hostname
	I1018 09:35:11.382677 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-250274
	
	I1018 09:35:11.382843 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:11.410095 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:11.410409 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:11.410427 1481740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-250274' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-250274/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-250274' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:35:11.576823 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:35:11.576848 1481740 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:35:11.576870 1481740 ubuntu.go:190] setting up certificates
	I1018 09:35:11.576879 1481740 provision.go:84] configureAuth start
	I1018 09:35:11.576951 1481740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:35:11.595769 1481740 provision.go:143] copyHostCerts
	I1018 09:35:11.595828 1481740 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:35:11.596013 1481740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:35:11.596107 1481740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:35:11.596223 1481740 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:35:11.596229 1481740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:35:11.596257 1481740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:35:11.596318 1481740 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:35:11.596323 1481740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:35:11.596346 1481740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:35:11.596401 1481740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.newest-cni-250274 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-250274]
	I1018 09:35:12.355708 1481740 provision.go:177] copyRemoteCerts
	I1018 09:35:12.355779 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:35:12.355831 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:12.375529 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	W1018 09:35:08.929925 1474687 node_ready.go:57] node "default-k8s-diff-port-593480" has "Ready":"False" status (will retry)
	I1018 09:35:10.929268 1474687 node_ready.go:49] node "default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:10.929306 1474687 node_ready.go:38] duration metric: took 41.002800702s for node "default-k8s-diff-port-593480" to be "Ready" ...
	I1018 09:35:10.929321 1474687 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:35:10.929387 1474687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:35:10.943201 1474687 api_server.go:72] duration metric: took 42.289449947s to wait for apiserver process to appear ...
	I1018 09:35:10.943224 1474687 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:35:10.943243 1474687 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1018 09:35:10.963991 1474687 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1018 09:35:10.965001 1474687 api_server.go:141] control plane version: v1.34.1
	I1018 09:35:10.965026 1474687 api_server.go:131] duration metric: took 21.794732ms to wait for apiserver health ...
	I1018 09:35:10.965035 1474687 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:35:10.968142 1474687 system_pods.go:59] 8 kube-system pods found
	I1018 09:35:10.968179 1474687 system_pods.go:61] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:35:10.968187 1474687 system_pods.go:61] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:10.968193 1474687 system_pods.go:61] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:10.968198 1474687 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:10.968204 1474687 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:10.968210 1474687 system_pods.go:61] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:10.968221 1474687 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:10.968237 1474687 system_pods.go:61] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:35:10.968256 1474687 system_pods.go:74] duration metric: took 3.214188ms to wait for pod list to return data ...
	I1018 09:35:10.968265 1474687 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:35:10.970910 1474687 default_sa.go:45] found service account: "default"
	I1018 09:35:10.970940 1474687 default_sa.go:55] duration metric: took 2.66185ms for default service account to be created ...
	I1018 09:35:10.970949 1474687 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:35:10.973952 1474687 system_pods.go:86] 8 kube-system pods found
	I1018 09:35:10.973988 1474687 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:35:10.973995 1474687 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:10.974001 1474687 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:10.974006 1474687 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:10.974011 1474687 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:10.974015 1474687 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:10.974020 1474687 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:10.974032 1474687 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:35:10.974053 1474687 retry.go:31] will retry after 221.086539ms: missing components: kube-dns
	I1018 09:35:11.227378 1474687 system_pods.go:86] 8 kube-system pods found
	I1018 09:35:11.227412 1474687 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:35:11.227419 1474687 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:11.227426 1474687 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:11.227430 1474687 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:11.227434 1474687 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:11.227438 1474687 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:11.227445 1474687 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:11.227450 1474687 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:35:11.227465 1474687 retry.go:31] will retry after 359.059247ms: missing components: kube-dns
	I1018 09:35:11.591651 1474687 system_pods.go:86] 8 kube-system pods found
	I1018 09:35:11.591680 1474687 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Running
	I1018 09:35:11.591687 1474687 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running
	I1018 09:35:11.591692 1474687 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:35:11.591696 1474687 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running
	I1018 09:35:11.591700 1474687 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running
	I1018 09:35:11.591704 1474687 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:35:11.591708 1474687 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running
	I1018 09:35:11.591711 1474687 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Running
	I1018 09:35:11.591719 1474687 system_pods.go:126] duration metric: took 620.76266ms to wait for k8s-apps to be running ...
	I1018 09:35:11.591731 1474687 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:35:11.591788 1474687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:35:11.607085 1474687 system_svc.go:56] duration metric: took 15.349406ms WaitForService to wait for kubelet
	I1018 09:35:11.607109 1474687 kubeadm.go:586] duration metric: took 42.953363535s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:35:11.607128 1474687 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:35:11.610448 1474687 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:35:11.610475 1474687 node_conditions.go:123] node cpu capacity is 2
	I1018 09:35:11.610486 1474687 node_conditions.go:105] duration metric: took 3.353063ms to run NodePressure ...
	I1018 09:35:11.610498 1474687 start.go:241] waiting for startup goroutines ...
	I1018 09:35:11.610506 1474687 start.go:246] waiting for cluster config update ...
	I1018 09:35:11.610516 1474687 start.go:255] writing updated cluster config ...
	I1018 09:35:11.610802 1474687 ssh_runner.go:195] Run: rm -f paused
	I1018 09:35:11.619175 1474687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:35:11.623267 1474687 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.629186 1474687 pod_ready.go:94] pod "coredns-66bc5c9577-lxwgf" is "Ready"
	I1018 09:35:11.629210 1474687 pod_ready.go:86] duration metric: took 5.918899ms for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.632132 1474687 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.637675 1474687 pod_ready.go:94] pod "etcd-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:11.637748 1474687 pod_ready.go:86] duration metric: took 5.592771ms for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.646445 1474687 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.652737 1474687 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:11.652766 1474687 pod_ready.go:86] duration metric: took 6.294159ms for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:11.657197 1474687 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.023696 1474687 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:12.023723 1474687 pod_ready.go:86] duration metric: took 366.501267ms for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.225055 1474687 pod_ready.go:83] waiting for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.623349 1474687 pod_ready.go:94] pod "kube-proxy-lz9p5" is "Ready"
	I1018 09:35:12.623374 1474687 pod_ready.go:86] duration metric: took 398.289755ms for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:12.823706 1474687 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:13.223594 1474687 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:13.223626 1474687 pod_ready.go:86] duration metric: took 399.888669ms for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:35:13.223639 1474687 pod_ready.go:40] duration metric: took 1.604415912s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:35:13.301877 1474687 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:35:13.305290 1474687 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-593480" cluster and "default" namespace by default
	I1018 09:35:12.481985 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:35:12.500215 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:35:12.518262 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:35:12.535625 1481740 provision.go:87] duration metric: took 958.724947ms to configureAuth
	I1018 09:35:12.535656 1481740 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:35:12.535878 1481740 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:12.535994 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:12.554366 1481740 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:12.554803 1481740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34911 <nil> <nil>}
	I1018 09:35:12.554821 1481740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:35:12.843291 1481740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:35:12.843313 1481740 machine.go:96] duration metric: took 4.863913345s to provisionDockerMachine
	I1018 09:35:12.843324 1481740 start.go:293] postStartSetup for "newest-cni-250274" (driver="docker")
	I1018 09:35:12.843334 1481740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:35:12.843391 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:35:12.843449 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:12.861749 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:12.964910 1481740 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:35:12.969111 1481740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:35:12.969140 1481740 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:35:12.969151 1481740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:35:12.969229 1481740 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:35:12.969334 1481740 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:35:12.969489 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:35:12.977232 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:12.995598 1481740 start.go:296] duration metric: took 152.258132ms for postStartSetup
	I1018 09:35:12.995699 1481740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:35:12.995753 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:13.015253 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:13.116800 1481740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:35:13.121486 1481740 fix.go:56] duration metric: took 5.448972155s for fixHost
	I1018 09:35:13.121512 1481740 start.go:83] releasing machines lock for "newest-cni-250274", held for 5.449028423s
	I1018 09:35:13.121591 1481740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-250274
	I1018 09:35:13.138663 1481740 ssh_runner.go:195] Run: cat /version.json
	I1018 09:35:13.138745 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:13.139088 1481740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:35:13.139159 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:13.157849 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:13.158304 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:13.380597 1481740 ssh_runner.go:195] Run: systemctl --version
	I1018 09:35:13.387893 1481740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:35:13.474779 1481740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:35:13.480245 1481740 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:35:13.480317 1481740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:35:13.489559 1481740 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
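The find command above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix so only the CNI minikube installs stays active (here nothing matched). Undoing it is the mirror operation, sketched below:

	# Sketch: restore CNI configs that were parked with the .mk_disabled suffix
	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;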
	I1018 09:35:13.489580 1481740 start.go:495] detecting cgroup driver to use...
	I1018 09:35:13.489611 1481740 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:35:13.489658 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:35:13.506229 1481740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:35:13.530174 1481740 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:35:13.530234 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:35:13.549911 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:35:13.566716 1481740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:35:13.759046 1481740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:35:13.888862 1481740 docker.go:234] disabling docker service ...
	I1018 09:35:13.888950 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:35:13.905196 1481740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:35:13.920613 1481740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:35:14.084167 1481740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:35:14.224030 1481740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:35:14.237413 1481740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:35:14.250763 1481740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:35:14.250832 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.259541 1481740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:35:14.259610 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.275347 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.284139 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.293584 1481740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:35:14.301331 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.316397 1481740 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:14.324990 1481740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
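The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon_cgroup, and a default_sysctls entry that unblocks privileged ports for pods. A sketch for inspecting the edited drop-in, with the values the seds should have produced shown as comments (a reconstruction, not captured from the node):

	# Sketch: inspect the edited CRI-O drop-in after the seds above
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",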
	I1018 09:35:14.334447 1481740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:35:14.343757 1481740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:35:14.352870 1481740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:14.488953 1481740 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:35:14.626670 1481740 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:35:14.626738 1481740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:35:14.631882 1481740 start.go:563] Will wait 60s for crictl version
	I1018 09:35:14.631943 1481740 ssh_runner.go:195] Run: which crictl
	I1018 09:35:14.635554 1481740 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:35:14.660118 1481740 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:35:14.660278 1481740 ssh_runner.go:195] Run: crio --version
	I1018 09:35:14.692419 1481740 ssh_runner.go:195] Run: crio --version
	I1018 09:35:14.724831 1481740 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:35:14.727979 1481740 cli_runner.go:164] Run: docker network inspect newest-cni-250274 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:35:14.745664 1481740 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:35:14.749471 1481740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
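The /etc/hosts update above uses a grep/echo/cp round trip instead of sed -i: with the docker driver /etc/hosts is a bind mount, and replacing the file's inode (which sed -i does) fails there, while cp rewrites the contents in place. The same idiom generalized (NAME and IP are placeholders):

	# Sketch: idempotently pin NAME to IP in /etc/hosts without replacing the inode
	NAME=host.minikube.internal; IP=192.168.76.1
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$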
	I1018 09:35:14.764773 1481740 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:35:14.768286 1481740 kubeadm.go:883] updating cluster {Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:35:14.768419 1481740 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:14.768503 1481740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:14.801828 1481740 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:14.801854 1481740 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:35:14.801911 1481740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:14.826228 1481740 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:14.826251 1481740 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:35:14.826259 1481740 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:35:14.826360 1481740 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-250274 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
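The drop-in above empties ExecStart and relaunches kubelet with the profile's flags; once scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (a few lines below), the effective unit can be reviewed in one shot (a sketch, assuming the newest-cni-250274 profile):

	# Sketch: render the kubelet unit together with minikube's 10-kubeadm.conf drop-in
	minikube ssh -p newest-cni-250274 -- systemctl cat kubelet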
	I1018 09:35:14.826446 1481740 ssh_runner.go:195] Run: crio config
	I1018 09:35:14.905972 1481740 cni.go:84] Creating CNI manager for ""
	I1018 09:35:14.905993 1481740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:14.906020 1481740 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:35:14.906044 1481740 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-250274 NodeName:newest-cni-250274 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:35:14.906187 1481740 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-250274"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:35:14.906344 1481740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:35:14.914745 1481740 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:35:14.914860 1481740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:35:14.922783 1481740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:35:14.936583 1481740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:35:14.950976 1481740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
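The multi-document kubeadm config rendered above is now on the node at /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch for sanity-checking it with kubeadm itself (this assumes a kubeadm binary sits alongside the kubelet under /var/lib/minikube/binaries, and that the version is recent enough to have `kubeadm config validate`):

	# Sketch: validate the generated config with the node's matching kubeadm binary
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new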
	I1018 09:35:14.964788 1481740 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:35:14.968749 1481740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:35:14.978644 1481740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:15.109859 1481740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:15.132454 1481740 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274 for IP: 192.168.76.2
	I1018 09:35:15.132477 1481740 certs.go:195] generating shared ca certs ...
	I1018 09:35:15.132494 1481740 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:15.132690 1481740 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:35:15.132760 1481740 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:35:15.132775 1481740 certs.go:257] generating profile certs ...
	I1018 09:35:15.132897 1481740 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/client.key
	I1018 09:35:15.132989 1481740 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key.08fa8726
	I1018 09:35:15.133059 1481740 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key
	I1018 09:35:15.133219 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:35:15.133276 1481740 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:35:15.133293 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:35:15.133334 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:35:15.133379 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:35:15.133413 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:35:15.133491 1481740 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:15.134198 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:35:15.158815 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:35:15.184882 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:35:15.209112 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:35:15.230570 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:35:15.255541 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:35:15.303005 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:35:15.334594 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/newest-cni-250274/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:35:15.354973 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:35:15.378445 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:35:15.400905 1481740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:35:15.424111 1481740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:35:15.439388 1481740 ssh_runner.go:195] Run: openssl version
	I1018 09:35:15.446286 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:35:15.455046 1481740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:15.458845 1481740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:15.458926 1481740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:15.504387 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:35:15.512106 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:35:15.520175 1481740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:35:15.523930 1481740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:35:15.524021 1481740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:35:15.565230 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:35:15.573270 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:35:15.581447 1481740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:35:15.585095 1481740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:35:15.585159 1481740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:35:15.627708 1481740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
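Each `ln -fs` above publishes a certificate under its OpenSSL subject hash (the value printed by `openssl x509 -hash -noout`), which is how CApath lookups locate trust anchors; b5213941.0 is minikubeCA's hash, for example. A quick sketch to confirm the links work on the node:

	# Sketch: confirm OpenSSL resolves the self-signed minikube CA through the
	# hashed symlinks just created under /etc/ssl/certs
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem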
	I1018 09:35:15.635390 1481740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:35:15.639295 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:35:15.691161 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:35:15.743878 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:35:15.798855 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:35:15.901722 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:35:15.990208 1481740 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
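The `-checkend 86400` probes above make openssl exit non-zero if a certificate will expire within the next 86400 seconds (24 hours); any failure here would force the certs to be regenerated instead of reused. The idiom in isolation:

	# Sketch: exit 0 if the cert is still valid 24h from now, else report it
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver.crt || echo "certificate expires within 24h"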
	I1018 09:35:16.123903 1481740 kubeadm.go:400] StartCluster: {Name:newest-cni-250274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-250274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:16.124038 1481740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:35:16.124128 1481740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:35:16.203415 1481740 cri.go:89] found id: "838ef5430e58bb4a609136dfa74910535190f395496c2bd21432db44c19aaff4"
	I1018 09:35:16.203482 1481740 cri.go:89] found id: "52152d05aeb48008c167a0cc9d9f80e34c5ab6124747ccfbbf79ba25a61db69f"
	I1018 09:35:16.203501 1481740 cri.go:89] found id: "66052a766abf5dba4b7c9118f1e1e91be861206c216d0a3766c7fcebd6504824"
	I1018 09:35:16.203518 1481740 cri.go:89] found id: "89f5e6f41611e1935f1802e4ae146f223304dda14ce071d5b606ea7ceb35d965"
	I1018 09:35:16.203536 1481740 cri.go:89] found id: ""
	I1018 09:35:16.203612 1481740 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:35:16.224327 1481740 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:16Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:35:16.224461 1481740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:35:16.238114 1481740 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:35:16.238179 1481740 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:35:16.238261 1481740 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:35:16.248851 1481740 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:35:16.249531 1481740 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-250274" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:16.249863 1481740 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-1274243/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-250274" cluster setting kubeconfig missing "newest-cni-250274" context setting]
	I1018 09:35:16.250496 1481740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:16.252747 1481740 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:35:16.278778 1481740 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:35:16.278819 1481740 kubeadm.go:601] duration metric: took 40.619299ms to restartPrimaryControlPlane
	I1018 09:35:16.278829 1481740 kubeadm.go:402] duration metric: took 154.935603ms to StartCluster
	I1018 09:35:16.278848 1481740 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:16.278924 1481740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:16.279901 1481740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:16.280119 1481740 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:35:16.280488 1481740 config.go:182] Loaded profile config "newest-cni-250274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:16.280563 1481740 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:35:16.280675 1481740 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-250274"
	I1018 09:35:16.280695 1481740 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-250274"
	W1018 09:35:16.280706 1481740 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:35:16.280723 1481740 addons.go:69] Setting dashboard=true in profile "newest-cni-250274"
	I1018 09:35:16.280768 1481740 addons.go:238] Setting addon dashboard=true in "newest-cni-250274"
	W1018 09:35:16.280798 1481740 addons.go:247] addon dashboard should already be in state true
	I1018 09:35:16.280727 1481740 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:16.280863 1481740 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:16.281319 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.281589 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.280734 1481740 addons.go:69] Setting default-storageclass=true in profile "newest-cni-250274"
	I1018 09:35:16.281807 1481740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-250274"
	I1018 09:35:16.282555 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.284931 1481740 out.go:179] * Verifying Kubernetes components...
	I1018 09:35:16.295995 1481740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:16.348048 1481740 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:35:16.351007 1481740 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:35:16.351060 1481740 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:35:16.352470 1481740 addons.go:238] Setting addon default-storageclass=true in "newest-cni-250274"
	W1018 09:35:16.352487 1481740 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:35:16.352512 1481740 host.go:66] Checking if "newest-cni-250274" exists ...
	I1018 09:35:16.352956 1481740 cli_runner.go:164] Run: docker container inspect newest-cni-250274 --format={{.State.Status}}
	I1018 09:35:16.355937 1481740 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:16.355961 1481740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:35:16.356022 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:16.356163 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:35:16.356177 1481740 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:35:16.356219 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:16.401098 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:16.406112 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:16.417491 1481740 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:16.417513 1481740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:35:16.417578 1481740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-250274
	I1018 09:35:16.451582 1481740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34911 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/newest-cni-250274/id_rsa Username:docker}
	I1018 09:35:16.682103 1481740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:16.699283 1481740 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:35:16.699381 1481740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:35:16.725360 1481740 api_server.go:72] duration metric: took 445.209726ms to wait for apiserver process to appear ...
	I1018 09:35:16.725426 1481740 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:35:16.725473 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:16.739521 1481740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:16.744868 1481740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:16.751681 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:35:16.751742 1481740 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:35:16.791579 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:35:16.791645 1481740 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:35:16.820337 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:35:16.820401 1481740 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:35:16.847089 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:35:16.847152 1481740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:35:16.910886 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:35:16.910961 1481740 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:35:16.991044 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:35:16.991119 1481740 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:35:17.042713 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:35:17.042798 1481740 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:35:17.058686 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:35:17.058760 1481740 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:35:17.077136 1481740 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:35:17.077208 1481740 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:35:17.095024 1481740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
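The combined apply above creates the dashboard addon's objects; assuming the stock minikube dashboard addon, they land in the kubernetes-dashboard namespace and can be checked the same way the log drives kubectl:

	# Sketch: check the dashboard deployment created by the apply above
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  -n kubernetes-dashboard get deploy,pods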
	I1018 09:35:21.727935 1481740 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:35:21.728037 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:22.260351 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:35:22.260421 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:35:22.260452 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:22.305300 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:35:22.305371 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:35:22.725851 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:22.787455 1481740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.047858986s)
	I1018 09:35:22.835923 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:35:22.835959 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:35:23.225579 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:23.253697 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:35:23.253721 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:35:23.725915 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:23.741448 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:35:23.741477 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:35:24.226193 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:24.244770 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:35:24.244794 1481740 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:35:24.725829 1481740 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:35:24.734409 1481740 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:35:24.735613 1481740 api_server.go:141] control plane version: v1.34.1
	I1018 09:35:24.735645 1481740 api_server.go:131] duration metric: took 8.010197923s to wait for apiserver health ...
	I1018 09:35:24.735655 1481740 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:35:24.784777 1481740 system_pods.go:59] 8 kube-system pods found
	I1018 09:35:24.784813 1481740 system_pods.go:61] "coredns-66bc5c9577-g7kfg" [38b7f130-b2b9-48a2-93bd-ad4c13e911cb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:35:24.784824 1481740 system_pods.go:61] "etcd-newest-cni-250274" [b856dfe7-8c88-4774-9e86-2b971cf7e5f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:35:24.784831 1481740 system_pods.go:61] "kindnet-p4pv8" [7a400bc4-76f3-4503-b82a-52b0cabbb2a3] Running
	I1018 09:35:24.784839 1481740 system_pods.go:61] "kube-apiserver-newest-cni-250274" [2b020b61-a478-4fd1-9bd8-ae42ae1ab60e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:35:24.784845 1481740 system_pods.go:61] "kube-controller-manager-newest-cni-250274" [54fb4f01-f3c6-4b86-a2e4-48e6656c751e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:35:24.784851 1481740 system_pods.go:61] "kube-proxy-w56ln" [84d08ca5-9902-4380-bd4e-2aac486b22e6] Running
	I1018 09:35:24.784860 1481740 system_pods.go:61] "kube-scheduler-newest-cni-250274" [51b1c6fd-638b-47fa-9f59-e24e2ec914f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:35:24.784866 1481740 system_pods.go:61] "storage-provisioner" [8a360733-56ab-4bc7-ae00-5f7b4d528d8d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:35:24.784872 1481740 system_pods.go:74] duration metric: took 49.21114ms to wait for pod list to return data ...
	I1018 09:35:24.784881 1481740 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:35:24.804651 1481740 default_sa.go:45] found service account: "default"
	I1018 09:35:24.804674 1481740 default_sa.go:55] duration metric: took 19.787559ms for default service account to be created ...
	I1018 09:35:24.804686 1481740 kubeadm.go:586] duration metric: took 8.524542239s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:35:24.804702 1481740 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:35:24.850210 1481740 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:35:24.850253 1481740 node_conditions.go:123] node cpu capacity is 2
	I1018 09:35:24.850266 1481740 node_conditions.go:105] duration metric: took 45.558427ms to run NodePressure ...
	I1018 09:35:24.850344 1481740 start.go:241] waiting for startup goroutines ...
	I1018 09:35:24.931506 1481740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.186557626s)
	I1018 09:35:25.059247 1481740 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.964080513s)
	I1018 09:35:25.064223 1481740 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-250274 addons enable metrics-server
	
	I1018 09:35:25.067279 1481740 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1018 09:35:25.070270 1481740 addons.go:514] duration metric: took 8.789690746s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1018 09:35:25.070324 1481740 start.go:246] waiting for cluster config update ...
	I1018 09:35:25.070338 1481740 start.go:255] writing updated cluster config ...
	I1018 09:35:25.070623 1481740 ssh_runner.go:195] Run: rm -f paused
	I1018 09:35:25.195590 1481740 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:35:25.199144 1481740 out.go:179] * Done! kubectl is now configured to use "newest-cni-250274" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.563654047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.578517386Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4bb1dccb-6ca8-43b9-aea6-64ea91a16092 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.579629732Z" level=info msg="Running pod sandbox: kube-system/kindnet-p4pv8/POD" id=317bcda0-e932-4395-ba8f-dd9649935c89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.579814425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.58906412Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=317bcda0-e932-4395-ba8f-dd9649935c89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.607176394Z" level=info msg="Ran pod sandbox 48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191 with infra container: kube-system/kube-proxy-w56ln/POD" id=4bb1dccb-6ca8-43b9-aea6-64ea91a16092 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.620690939Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=87d28f56-4fff-4bfb-b718-1e55731e6344 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.646350835Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=0fc1cc3f-a192-4e1a-a877-4b61a7b67ef0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.649909078Z" level=info msg="Creating container: kube-system/kube-proxy-w56ln/kube-proxy" id=8d45a1da-7754-4a5f-9ff4-5cc95dbe9eff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.695759041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.709994123Z" level=info msg="Ran pod sandbox 0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d with infra container: kube-system/kindnet-p4pv8/POD" id=317bcda0-e932-4395-ba8f-dd9649935c89 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.738715546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.739217251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.744193079Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=10a01f6c-a86e-4f13-9ae9-f66644c38d7b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.753280572Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=14fd6686-278b-490e-897a-ec17058b9aca name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.764554612Z" level=info msg="Creating container: kube-system/kindnet-p4pv8/kindnet-cni" id=fda8aa63-5899-4b29-859f-bcfe2d4adfa9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.764888166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.790760044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.791244305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.887491509Z" level=info msg="Created container 3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e: kube-system/kindnet-p4pv8/kindnet-cni" id=fda8aa63-5899-4b29-859f-bcfe2d4adfa9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.887882268Z" level=info msg="Created container 21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482: kube-system/kube-proxy-w56ln/kube-proxy" id=8d45a1da-7754-4a5f-9ff4-5cc95dbe9eff name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.903980592Z" level=info msg="Starting container: 3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e" id=4b7f5136-4b7f-4d70-8b96-a5ad55490289 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.904011369Z" level=info msg="Starting container: 21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482" id=8e5aa497-b325-459e-ab03-345f3cb713f4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.911241592Z" level=info msg="Started container" PID=1061 containerID=3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e description=kube-system/kindnet-p4pv8/kindnet-cni id=4b7f5136-4b7f-4d70-8b96-a5ad55490289 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d
	Oct 18 09:35:23 newest-cni-250274 crio[609]: time="2025-10-18T09:35:23.920542863Z" level=info msg="Started container" PID=1058 containerID=21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482 description=kube-system/kube-proxy-w56ln/kube-proxy id=8e5aa497-b325-459e-ab03-345f3cb713f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3439d88200c25       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   0899e91e5c34e       kindnet-p4pv8                               kube-system
	21ff0b5d40f5d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   48e702a5fca0a       kube-proxy-w56ln                            kube-system
	838ef5430e58b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            1                   ec768dc9719e2       kube-scheduler-newest-cni-250274            kube-system
	52152d05aeb48       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   1                   7d4aa5582ddf1       kube-controller-manager-newest-cni-250274   kube-system
	66052a766abf5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            1                   11ee665c5977d       kube-apiserver-newest-cni-250274            kube-system
	89f5e6f41611e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      1                   12742abdaf74f       etcd-newest-cni-250274                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-250274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-250274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=newest-cni-250274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_34_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:34:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-250274
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:35:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:35:22 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:35:22 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:35:22 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 09:35:22 +0000   Sat, 18 Oct 2025 09:34:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-250274
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c687e818-f7ce-4926-9d94-118c26727656
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-250274                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-p4pv8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-250274             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-250274    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-w56ln                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-250274             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientPID     36s                kubelet          Node newest-cni-250274 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node newest-cni-250274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-250274 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-250274 event: Registered Node newest-cni-250274 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-250274 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-250274 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-250274 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-250274 event: Registered Node newest-cni-250274 in Controller
	
	
	==> dmesg <==
	[  +9.741593] overlayfs: idmapped layers are currently not supported
	[Oct18 09:14] overlayfs: idmapped layers are currently not supported
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	[Oct18 09:34] overlayfs: idmapped layers are currently not supported
	[ +34.458375] overlayfs: idmapped layers are currently not supported
	[Oct18 09:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [89f5e6f41611e1935f1802e4ae146f223304dda14ce071d5b606ea7ceb35d965] <==
	{"level":"warn","ts":"2025-10-18T09:35:20.978305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.001411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.015306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.031460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.048633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.069811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.097682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.121474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.134273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.152295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.180102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.198525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.217559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.231203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.249987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.274178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.293758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.320091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.336848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.365293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.386358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.414352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.441615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.456639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:21.561696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49012","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:35:31 up 11:17,  0 user,  load average: 2.99, 3.15, 2.71
	Linux newest-cni-250274 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3439d88200c25fc67750117cfd2823cb088f5e44b59989c6f913d4654ded8a9e] <==
	I1018 09:35:24.028968       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:35:24.029379       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:35:24.029539       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:35:24.029556       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:35:24.029614       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:35:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:35:24.309660       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:35:24.316104       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:35:24.316201       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:35:24.316363       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [66052a766abf5dba4b7c9118f1e1e91be861206c216d0a3766c7fcebd6504824] <==
	I1018 09:35:22.656307       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:35:22.656407       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:35:22.656450       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:35:22.657528       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:35:22.658157       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:35:22.658170       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:35:22.658176       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:35:22.658182       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:35:22.659289       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:35:22.659305       1 policy_source.go:240] refreshing policies
	I1018 09:35:22.684474       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:35:22.701300       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:35:22.796039       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:35:23.149185       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:35:23.375151       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:35:24.424543       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:35:24.637905       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:35:24.703691       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:35:24.783099       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:35:25.012953       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.127.31"}
	I1018 09:35:25.050787       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.42.111"}
	I1018 09:35:25.958153       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:35:26.281684       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:35:26.390711       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:35:26.429187       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [52152d05aeb48008c167a0cc9d9f80e34c5ab6124747ccfbbf79ba25a61db69f] <==
	I1018 09:35:25.910899       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:35:25.917627       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:35:25.920409       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:35:25.920477       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:35:25.920530       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:35:25.936195       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:35:25.941071       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:35:25.941154       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:35:25.941191       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:35:25.941218       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:35:25.946444       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:35:25.947099       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:35:25.947374       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:35:25.947475       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:35:25.947667       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:35:25.948938       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-250274"
	I1018 09:35:25.950580       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:35:25.950848       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:35:25.985969       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:35:25.986037       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:35:25.986100       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:35:25.986795       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:35:25.987028       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:35:25.987043       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:35:25.987056       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [21ff0b5d40f5dcd9e0ee439fe9b4b161d060e1a61266bc62955a345f5f20b482] <==
	I1018 09:35:24.438302       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:35:24.713697       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:35:24.914579       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:35:24.914624       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:35:24.914720       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:35:25.064828       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:35:25.065016       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:35:25.085108       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:35:25.085726       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:35:25.085789       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:35:25.089272       1 config.go:200] "Starting service config controller"
	I1018 09:35:25.089594       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:35:25.089670       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:35:25.089701       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:35:25.089753       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:35:25.089780       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:35:25.090890       1 config.go:309] "Starting node config controller"
	I1018 09:35:25.090966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:35:25.090997       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:35:25.196450       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:35:25.197616       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:35:25.197645       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [838ef5430e58bb4a609136dfa74910535190f395496c2bd21432db44c19aaff4] <==
	I1018 09:35:22.452689       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:35:22.460582       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:35:22.460695       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:35:22.460713       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:35:22.460729       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 09:35:22.476430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1018 09:35:22.488421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:35:22.488536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:35:22.488613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:35:22.488696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:35:22.488770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:35:22.488851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:35:22.488932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:35:22.489013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:35:22.489098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:35:22.489166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:35:22.489251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:35:22.489324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:35:22.489712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:35:22.489788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:35:22.489811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:35:22.489823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:35:22.489902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:35:22.490205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1018 09:35:23.565473       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.458543     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: E1018 09:35:22.800116     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-250274\" already exists" pod="kube-system/kube-controller-manager-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.800165     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: E1018 09:35:22.849761     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-250274\" already exists" pod="kube-system/kube-scheduler-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.849901     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.868913     724 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.869031     724 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.869070     724 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.872635     724 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: E1018 09:35:22.893558     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-250274\" already exists" pod="kube-system/etcd-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: I1018 09:35:22.900160     724 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-250274"
	Oct 18 09:35:22 newest-cni-250274 kubelet[724]: E1018 09:35:22.948960     724 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-250274\" already exists" pod="kube-system/kube-apiserver-newest-cni-250274"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.236384     724 apiserver.go:52] "Watching apiserver"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.281596     724 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.283403     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-cni-cfg\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.321267     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-xtables-lock\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.321323     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84d08ca5-9902-4380-bd4e-2aac486b22e6-xtables-lock\") pod \"kube-proxy-w56ln\" (UID: \"84d08ca5-9902-4380-bd4e-2aac486b22e6\") " pod="kube-system/kube-proxy-w56ln"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.321346     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a400bc4-76f3-4503-b82a-52b0cabbb2a3-lib-modules\") pod \"kindnet-p4pv8\" (UID: \"7a400bc4-76f3-4503-b82a-52b0cabbb2a3\") " pod="kube-system/kindnet-p4pv8"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.321365     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84d08ca5-9902-4380-bd4e-2aac486b22e6-lib-modules\") pod \"kube-proxy-w56ln\" (UID: \"84d08ca5-9902-4380-bd4e-2aac486b22e6\") " pod="kube-system/kube-proxy-w56ln"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: I1018 09:35:23.480827     724 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: W1018 09:35:23.598747     724 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/crio-48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191 WatchSource:0}: Error finding container 48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191: Status 404 returned error can't find the container with id 48e702a5fca0ae09ac175b2a8fec687fe5782aa76db0de09522eda1a06bd1191
	Oct 18 09:35:23 newest-cni-250274 kubelet[724]: W1018 09:35:23.702021     724 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f010420231a13bf1bf2f85b36b5b332c4bb0a0624b2e0eaeb9d03bc23b53aa4/crio-0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d WatchSource:0}: Error finding container 0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d: Status 404 returned error can't find the container with id 0899e91e5c34e6dd7a7797ef7b66df6f6ec6cf83f33d2c5599ef36befb41d98d
	Oct 18 09:35:26 newest-cni-250274 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:35:27 newest-cni-250274 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:35:27 newest-cni-250274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250274 -n newest-cni-250274
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-250274 -n newest-cni-250274: exit status 2 (355.189704ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-250274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-g7kfg storage-provisioner dashboard-metrics-scraper-6ffb444bf9-khvmj kubernetes-dashboard-855c9754f9-wwppf
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-250274 describe pod coredns-66bc5c9577-g7kfg storage-provisioner dashboard-metrics-scraper-6ffb444bf9-khvmj kubernetes-dashboard-855c9754f9-wwppf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-250274 describe pod coredns-66bc5c9577-g7kfg storage-provisioner dashboard-metrics-scraper-6ffb444bf9-khvmj kubernetes-dashboard-855c9754f9-wwppf: exit status 1 (90.613933ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-g7kfg" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-khvmj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wwppf" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-250274 describe pod coredns-66bc5c9577-g7kfg storage-provisioner dashboard-metrics-scraper-6ffb444bf9-khvmj kubernetes-dashboard-855c9754f9-wwppf: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.04s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (7.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-593480 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-593480 --alsologtostderr -v=1: exit status 80 (2.002613502s)

-- stdout --
	* Pausing node default-k8s-diff-port-593480 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 09:36:54.306052 1490300 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:36:54.306249 1490300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:36:54.306262 1490300 out.go:374] Setting ErrFile to fd 2...
	I1018 09:36:54.306267 1490300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:36:54.306538 1490300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:36:54.306919 1490300 out.go:368] Setting JSON to false
	I1018 09:36:54.306957 1490300 mustload.go:65] Loading cluster: default-k8s-diff-port-593480
	I1018 09:36:54.307421 1490300 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:36:54.308059 1490300 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:36:54.328256 1490300 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:36:54.328622 1490300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:36:54.396782 1490300 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 09:36:54.386711441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:36:54.400454 1490300 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-593480 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:36:54.406074 1490300 out.go:179] * Pausing node default-k8s-diff-port-593480 ... 
	I1018 09:36:54.409003 1490300 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:36:54.409354 1490300 ssh_runner.go:195] Run: systemctl --version
	I1018 09:36:54.409405 1490300 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:36:54.425774 1490300 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:36:54.530618 1490300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:36:54.544084 1490300 pause.go:52] kubelet running: true
	I1018 09:36:54.544150 1490300 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:36:54.795308 1490300 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:36:54.795397 1490300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:36:54.867483 1490300 cri.go:89] found id: "77edd5912d990436794cf936b8f51159dc4b9c1c9baaa23fc03d051c5c9c7c44"
	I1018 09:36:54.867505 1490300 cri.go:89] found id: "e96bf01e397dc74fec93b72b52bc80ee6fe7bddee4e09809a7a655beb5a2e18a"
	I1018 09:36:54.867511 1490300 cri.go:89] found id: "7f7413ae9355d190c2c94e35f835f62c3b9bfedcd668a89a6a63bee7beadb8e8"
	I1018 09:36:54.867515 1490300 cri.go:89] found id: "219038721a04310118069d66f5e074f6d504bd7804e061291016a223d0b92b7c"
	I1018 09:36:54.867518 1490300 cri.go:89] found id: "40b8d460478714d481d2976c4d0eab5fc8a6be7829e3e42b66c70ad0ca58af09"
	I1018 09:36:54.867522 1490300 cri.go:89] found id: "a2ae42e7111f68e250d80963ab8db67a0cbd21a5286168c732b5ae60441c17b7"
	I1018 09:36:54.867526 1490300 cri.go:89] found id: "52621647e0872882c5501e4bb01f9aa34bd6d544528f4617f5c91ad85298df0c"
	I1018 09:36:54.867530 1490300 cri.go:89] found id: "f25f31e0de7b14e6ec30c9543448ad6c36163463aa5bb218aac0f99a95ccfe92"
	I1018 09:36:54.867533 1490300 cri.go:89] found id: "733c7cf0be6cd400ab00223c34a62e45c087d4073aeca1345162c44182d78944"
	I1018 09:36:54.867542 1490300 cri.go:89] found id: "c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6"
	I1018 09:36:54.867546 1490300 cri.go:89] found id: "972710a1973b9cf8acbd41550a4a3ebfb5ec96b320e8f9397a2deaf9b46c3e0c"
	I1018 09:36:54.867549 1490300 cri.go:89] found id: ""
	I1018 09:36:54.867641 1490300 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:36:54.878576 1490300 retry.go:31] will retry after 273.094716ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:36:54Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:36:55.151988 1490300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:36:55.167145 1490300 pause.go:52] kubelet running: false
	I1018 09:36:55.167233 1490300 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:36:55.360738 1490300 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:36:55.360857 1490300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:36:55.426776 1490300 cri.go:89] found id: "77edd5912d990436794cf936b8f51159dc4b9c1c9baaa23fc03d051c5c9c7c44"
	I1018 09:36:55.426800 1490300 cri.go:89] found id: "e96bf01e397dc74fec93b72b52bc80ee6fe7bddee4e09809a7a655beb5a2e18a"
	I1018 09:36:55.426805 1490300 cri.go:89] found id: "7f7413ae9355d190c2c94e35f835f62c3b9bfedcd668a89a6a63bee7beadb8e8"
	I1018 09:36:55.426809 1490300 cri.go:89] found id: "219038721a04310118069d66f5e074f6d504bd7804e061291016a223d0b92b7c"
	I1018 09:36:55.426813 1490300 cri.go:89] found id: "40b8d460478714d481d2976c4d0eab5fc8a6be7829e3e42b66c70ad0ca58af09"
	I1018 09:36:55.426816 1490300 cri.go:89] found id: "a2ae42e7111f68e250d80963ab8db67a0cbd21a5286168c732b5ae60441c17b7"
	I1018 09:36:55.426820 1490300 cri.go:89] found id: "52621647e0872882c5501e4bb01f9aa34bd6d544528f4617f5c91ad85298df0c"
	I1018 09:36:55.426823 1490300 cri.go:89] found id: "f25f31e0de7b14e6ec30c9543448ad6c36163463aa5bb218aac0f99a95ccfe92"
	I1018 09:36:55.426827 1490300 cri.go:89] found id: "733c7cf0be6cd400ab00223c34a62e45c087d4073aeca1345162c44182d78944"
	I1018 09:36:55.426853 1490300 cri.go:89] found id: "c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6"
	I1018 09:36:55.426863 1490300 cri.go:89] found id: "972710a1973b9cf8acbd41550a4a3ebfb5ec96b320e8f9397a2deaf9b46c3e0c"
	I1018 09:36:55.426872 1490300 cri.go:89] found id: ""
	I1018 09:36:55.426943 1490300 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:36:55.439192 1490300 retry.go:31] will retry after 502.897565ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:36:55Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:36:55.942920 1490300 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:36:55.956465 1490300 pause.go:52] kubelet running: false
	I1018 09:36:55.956531 1490300 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:36:56.136498 1490300 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:36:56.136624 1490300 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:36:56.221948 1490300 cri.go:89] found id: "77edd5912d990436794cf936b8f51159dc4b9c1c9baaa23fc03d051c5c9c7c44"
	I1018 09:36:56.221971 1490300 cri.go:89] found id: "e96bf01e397dc74fec93b72b52bc80ee6fe7bddee4e09809a7a655beb5a2e18a"
	I1018 09:36:56.221976 1490300 cri.go:89] found id: "7f7413ae9355d190c2c94e35f835f62c3b9bfedcd668a89a6a63bee7beadb8e8"
	I1018 09:36:56.221980 1490300 cri.go:89] found id: "219038721a04310118069d66f5e074f6d504bd7804e061291016a223d0b92b7c"
	I1018 09:36:56.221982 1490300 cri.go:89] found id: "40b8d460478714d481d2976c4d0eab5fc8a6be7829e3e42b66c70ad0ca58af09"
	I1018 09:36:56.221986 1490300 cri.go:89] found id: "a2ae42e7111f68e250d80963ab8db67a0cbd21a5286168c732b5ae60441c17b7"
	I1018 09:36:56.221989 1490300 cri.go:89] found id: "52621647e0872882c5501e4bb01f9aa34bd6d544528f4617f5c91ad85298df0c"
	I1018 09:36:56.221993 1490300 cri.go:89] found id: "f25f31e0de7b14e6ec30c9543448ad6c36163463aa5bb218aac0f99a95ccfe92"
	I1018 09:36:56.221996 1490300 cri.go:89] found id: "733c7cf0be6cd400ab00223c34a62e45c087d4073aeca1345162c44182d78944"
	I1018 09:36:56.222002 1490300 cri.go:89] found id: "c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6"
	I1018 09:36:56.222029 1490300 cri.go:89] found id: "972710a1973b9cf8acbd41550a4a3ebfb5ec96b320e8f9397a2deaf9b46c3e0c"
	I1018 09:36:56.222048 1490300 cri.go:89] found id: ""
	I1018 09:36:56.222118 1490300 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:36:56.242759 1490300 out.go:203] 
	W1018 09:36:56.245697 1490300 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:36:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:36:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:36:56.245719 1490300 out.go:285] * 
	* 
	W1018 09:36:56.255280 1490300 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:36:56.258343 1490300 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-593480 --alsologtostderr -v=1 failed: exit status 80
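The failure mode is visible in the stderr above: each pause attempt runs `sudo runc list -f json` on the node, every attempt fails with `open /run/runc: no such file or directory`, and after the backoff retries (273ms, then 502ms) minikube gives up with GUEST_PAUSE. A minimal reproduction sketch in Go; the binary path and profile name come from this report, and the idea that crio keeps its runc state under a non-default root is an assumption, not a confirmed fix:

// Hypothetical reproduction sketch (not minikube source). Runs the same probe
// the pause path ran on the node; on this crio node it fails because /run/runc
// is absent.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "default-k8s-diff-port-593480" // profile from the report above
	cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Matches the log: "open /run/runc: no such file or directory".
		fmt.Printf("runc list failed: %v\n%s", err, out)
		// runc accepts a --root flag for a non-default state directory; which
		// root crio actually uses here is environment-specific and unverified.
	}
}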
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
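The proxy snapshot above is taken because stray HTTP_PROXY/HTTPS_PROXY settings on the host can hijack the 127.0.0.1 port-forwards these tests rely on. A minimal sketch of the same check (Go; "<empty>" mirrors the report's notation for unset or empty variables):

// Sketch of the HOST ENV snapshot printed by the harness above.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		v := os.Getenv(k)
		if v == "" {
			v = "<empty>" // the report prints unset/empty vars this way
		}
		fmt.Printf("%s=%q\n", k, v)
	}
}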
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-593480
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-593480:

-- stdout --
	[
	    {
	        "Id": "bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679",
	        "Created": "2025-10-18T09:33:54.439784864Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1486570,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:35:40.932890437Z",
	            "FinishedAt": "2025-10-18T09:35:38.186218456Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/hostname",
	        "HostsPath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/hosts",
	        "LogPath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679-json.log",
	        "Name": "/default-k8s-diff-port-593480",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-593480:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-593480",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679",
	                "LowerDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-593480",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-593480/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-593480",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-593480",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-593480",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afd171acdd1090977f4160057f319ed6d18dd04b4eb1326197c8da9a66878efc",
	            "SandboxKey": "/var/run/docker/netns/afd171acdd10",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34921"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34922"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34925"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34923"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34924"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-593480": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:b1:0c:5b:48:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1dd19821ca12a42bf31368ca6b87d68bd1622c2ff94469b47f038636ec26347a",
	                    "EndpointID": "d8e18e03504fb448faf3b46baccad74dd11e879eff1380c00648442919c324f3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-593480",
	                        "bfa509b1b053"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
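The SSH dial target the pause client used (127.0.0.1:34921 in the stderr above) comes straight out of this inspect payload: the harness resolves it with the same Go template shown in the cli_runner line earlier. A minimal sketch of that extraction (assumes a local docker CLI and the container from this report):

// Sketch: extract the published host port for 22/tcp, mirroring the template
// minikube's cli_runner uses to find the SSH port.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, "default-k8s-diff-port-593480").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("ssh host port: %s", out) // prints 34921 for the container above
}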
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480: exit status 2 (329.890345ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
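Here `--format={{.Host}}` renders a single field of minikube's status struct through a Go template, and the non-zero exit (2) signals that some component is down even though the host container reports Running; since the failed pause above had already run `systemctl disable --now kubelet`, that is expected. A small sketch querying two fields the same way (field names follow minikube's documented status template; treat them as assumptions on other versions):

// Sketch: query individual status fields as the harness does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "default-k8s-diff-port-593480"
	for _, field := range []string{"{{.Host}}", "{{.Kubelet}}"} {
		// A non-zero exit here encodes a degraded component, not a tooling error.
		out, _ := exec.Command("out/minikube-linux-arm64", "status",
			"--format", field, "-p", profile).Output()
		fmt.Printf("%s -> %s\n", field, out)
	}
}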
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-593480 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-593480 logs -n 25: (1.450122891s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p disable-driver-mounts-877810                                                                                                                                                                                                               │ disable-driver-mounts-877810 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:35 UTC │
	│ image   │ embed-certs-559379 image list --format=json                                                                                                                                                                                                   │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ pause   │ -p embed-certs-559379 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-250274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ stop    │ -p newest-cni-250274 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-250274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-593480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ image   │ newest-cni-250274 image list --format=json                                                                                                                                                                                                    │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ stop    │ -p default-k8s-diff-port-593480 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ pause   │ -p newest-cni-250274 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ delete  │ -p newest-cni-250274                                                                                                                                                                                                                          │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ delete  │ -p newest-cni-250274                                                                                                                                                                                                                          │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p auto-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-275703                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-593480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:36 UTC │
	│ image   │ default-k8s-diff-port-593480 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:36 UTC │
	│ pause   │ -p default-k8s-diff-port-593480 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:35:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:35:40.410325 1486102 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:35:40.410429 1486102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:40.410435 1486102 out.go:374] Setting ErrFile to fd 2...
	I1018 09:35:40.410439 1486102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:40.410769 1486102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:35:40.411193 1486102 out.go:368] Setting JSON to false
	I1018 09:35:40.412140 1486102 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40688,"bootTime":1760739453,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:35:40.412232 1486102 start.go:141] virtualization:  
	I1018 09:35:40.415660 1486102 out.go:179] * [default-k8s-diff-port-593480] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:35:40.420071 1486102 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:35:40.420232 1486102 notify.go:220] Checking for updates...
	I1018 09:35:40.423623 1486102 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:35:40.427168 1486102 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:40.430236 1486102 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:35:40.433128 1486102 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:35:40.436560 1486102 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:35:40.439981 1486102 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:40.440624 1486102 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:35:40.480237 1486102 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:35:40.480341 1486102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:40.625387 1486102 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-18 09:35:40.612686637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:40.625492 1486102 docker.go:318] overlay module found
	I1018 09:35:40.628975 1486102 out.go:179] * Using the docker driver based on existing profile
	I1018 09:35:40.632089 1486102 start.go:305] selected driver: docker
	I1018 09:35:40.632105 1486102 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:40.632205 1486102 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:35:40.633013 1486102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:40.780240 1486102 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-18 09:35:40.764789613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:40.780617 1486102 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:35:40.780638 1486102 cni.go:84] Creating CNI manager for ""
	I1018 09:35:40.780687 1486102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:40.780725 1486102 start.go:349] cluster config:
	{Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:40.785015 1486102 out.go:179] * Starting "default-k8s-diff-port-593480" primary control-plane node in "default-k8s-diff-port-593480" cluster
	I1018 09:35:40.788109 1486102 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:35:40.791057 1486102 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:35:40.793923 1486102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:40.793979 1486102 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:35:40.794022 1486102 cache.go:58] Caching tarball of preloaded images
	I1018 09:35:40.794108 1486102 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:35:40.794116 1486102 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:35:40.794222 1486102 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json ...
	I1018 09:35:40.794387 1486102 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:35:40.842980 1486102 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:35:40.843001 1486102 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:35:40.843014 1486102 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:35:40.843035 1486102 start.go:360] acquireMachinesLock for default-k8s-diff-port-593480: {Name:mk139126e1ddb766657a5fd510c1f904e5550412 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:35:40.843095 1486102 start.go:364] duration metric: took 38.637µs to acquireMachinesLock for "default-k8s-diff-port-593480"
	I1018 09:35:40.843114 1486102 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:35:40.843120 1486102 fix.go:54] fixHost starting: 
	I1018 09:35:40.843373 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:40.874865 1486102 fix.go:112] recreateIfNeeded on default-k8s-diff-port-593480: state=Stopped err=<nil>
	W1018 09:35:40.874893 1486102 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:35:40.042570 1485573 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-275703:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.452431146s)
	I1018 09:35:40.042611 1485573 kic.go:203] duration metric: took 4.452627835s to extract preloaded images to volume ...
	W1018 09:35:40.042781 1485573 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:35:40.042954 1485573 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:35:40.128682 1485573 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-275703 --name auto-275703 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-275703 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-275703 --network auto-275703 --ip 192.168.76.2 --volume auto-275703:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:35:40.502938 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Running}}
	I1018 09:35:40.534556 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:35:40.592873 1485573 cli_runner.go:164] Run: docker exec auto-275703 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:35:40.681972 1485573 oci.go:144] the created container "auto-275703" has a running status.
	I1018 09:35:40.682020 1485573 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa...
	I1018 09:35:41.919291 1485573 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:35:41.951589 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:35:41.978188 1485573 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:35:41.978207 1485573 kic_runner.go:114] Args: [docker exec --privileged auto-275703 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:35:42.057190 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:35:42.079632 1485573 machine.go:93] provisionDockerMachine start ...
	I1018 09:35:42.079759 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:42.127028 1485573 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:42.127415 1485573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34916 <nil> <nil>}
	I1018 09:35:42.127426 1485573 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:35:42.436060 1485573 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-275703
	
	I1018 09:35:42.436087 1485573 ubuntu.go:182] provisioning hostname "auto-275703"
	I1018 09:35:42.436147 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:42.454147 1485573 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:42.454450 1485573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34916 <nil> <nil>}
	I1018 09:35:42.454461 1485573 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-275703 && echo "auto-275703" | sudo tee /etc/hostname
	I1018 09:35:42.624105 1485573 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-275703
	
	I1018 09:35:42.624199 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:42.642273 1485573 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:42.642602 1485573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34916 <nil> <nil>}
	I1018 09:35:42.642625 1485573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-275703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-275703/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-275703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:35:42.792026 1485573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:35:42.792052 1485573 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:35:42.792070 1485573 ubuntu.go:190] setting up certificates
	I1018 09:35:42.792119 1485573 provision.go:84] configureAuth start
	I1018 09:35:42.792203 1485573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-275703
	I1018 09:35:42.809331 1485573 provision.go:143] copyHostCerts
	I1018 09:35:42.809424 1485573 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:35:42.809438 1485573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:35:42.809517 1485573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:35:42.809927 1485573 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:35:42.809944 1485573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:35:42.809990 1485573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:35:42.810055 1485573 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:35:42.810060 1485573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:35:42.810086 1485573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:35:42.810140 1485573 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.auto-275703 san=[127.0.0.1 192.168.76.2 auto-275703 localhost minikube]
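configureAuth generates that server certificate in Go; purely as an illustration of what it produces, an equivalent OpenSSL sequence (hypothetical commands; the org and SAN list are taken from the log line above) would look like:

    # Illustration only -- minikube does this in Go, not via the openssl CLI.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.auto-275703" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:auto-275703,DNS:localhost,DNS:minikube')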
	I1018 09:35:43.486206 1485573 provision.go:177] copyRemoteCerts
	I1018 09:35:43.486305 1485573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:35:43.486371 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:43.508827 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:43.615513 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:35:43.633149 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 09:35:43.651206 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:35:43.668576 1485573 provision.go:87] duration metric: took 876.426438ms to configureAuth
	I1018 09:35:43.668645 1485573 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:35:43.668842 1485573 config.go:182] Loaded profile config "auto-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:43.668957 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:43.686177 1485573 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:43.686485 1485573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34916 <nil> <nil>}
	I1018 09:35:43.686505 1485573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:35:43.938830 1485573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:35:43.938850 1485573 machine.go:96] duration metric: took 1.859195232s to provisionDockerMachine
	I1018 09:35:43.938859 1485573 client.go:171] duration metric: took 9.054172048s to LocalClient.Create
	I1018 09:35:43.938879 1485573 start.go:167] duration metric: took 9.054252539s to libmachine.API.Create "auto-275703"
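The write to /etc/sysconfig/crio.minikube above only takes effect because the crio unit in the kicbase image sources that file. The unit text is not dumped in this log, but the wiring is presumably along these lines (assumed excerpt, not verified against the image):

    # crio.service in the kicbase image (assumption; not shown in this log)
    [Service]
    EnvironmentFile=-/etc/sysconfig/crio.minikube
    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS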
	I1018 09:35:43.938886 1485573 start.go:293] postStartSetup for "auto-275703" (driver="docker")
	I1018 09:35:43.938895 1485573 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:35:43.938956 1485573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:35:43.939010 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:43.957273 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:44.064121 1485573 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:35:44.067600 1485573 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:35:44.067629 1485573 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:35:44.067641 1485573 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:35:44.067699 1485573 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:35:44.067797 1485573 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:35:44.067935 1485573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:35:44.075577 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:44.093209 1485573 start.go:296] duration metric: took 154.308961ms for postStartSetup
	I1018 09:35:44.093581 1485573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-275703
	I1018 09:35:44.110543 1485573 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/config.json ...
	I1018 09:35:44.110825 1485573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:35:44.110865 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:44.127604 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:44.229019 1485573 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:35:44.233806 1485573 start.go:128] duration metric: took 9.352752435s to createHost
	I1018 09:35:44.233828 1485573 start.go:83] releasing machines lock for "auto-275703", held for 9.352882252s
	I1018 09:35:44.233905 1485573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-275703
	I1018 09:35:44.250245 1485573 ssh_runner.go:195] Run: cat /version.json
	I1018 09:35:44.250307 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:44.250252 1485573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:35:44.250448 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:44.269547 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:44.287594 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:44.463203 1485573 ssh_runner.go:195] Run: systemctl --version
	I1018 09:35:44.469525 1485573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:35:44.505349 1485573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:35:44.509815 1485573 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:35:44.509885 1485573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:35:44.539017 1485573 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
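The find command is logged with its shell quoting stripped; a working quoted form of the scan that produced the "disabled" list above (reconstructed) is:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;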
	I1018 09:35:44.539042 1485573 start.go:495] detecting cgroup driver to use...
	I1018 09:35:44.539073 1485573 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:35:44.539130 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:35:44.556298 1485573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:35:44.571181 1485573 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:35:44.571255 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:35:44.590568 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:35:44.612530 1485573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:35:40.878196 1486102 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-593480" ...
	I1018 09:35:40.878276 1486102 cli_runner.go:164] Run: docker start default-k8s-diff-port-593480
	I1018 09:35:41.395025 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:41.450953 1486102 kic.go:430] container "default-k8s-diff-port-593480" state is running.
	I1018 09:35:41.452014 1486102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:35:41.503829 1486102 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json ...
	I1018 09:35:41.504115 1486102 machine.go:93] provisionDockerMachine start ...
	I1018 09:35:41.504179 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:41.565541 1486102 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:41.565866 1486102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34921 <nil> <nil>}
	I1018 09:35:41.565876 1486102 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:35:41.566539 1486102 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36150->127.0.0.1:34921: read: connection reset by peer
	I1018 09:35:44.723630 1486102 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-593480
	
	I1018 09:35:44.723659 1486102 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-593480"
	I1018 09:35:44.723727 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:44.749611 1486102 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:44.749913 1486102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34921 <nil> <nil>}
	I1018 09:35:44.749925 1486102 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-593480 && echo "default-k8s-diff-port-593480" | sudo tee /etc/hostname
	I1018 09:35:44.939421 1486102 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-593480
	
	I1018 09:35:44.939582 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:44.968276 1486102 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:44.968599 1486102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34921 <nil> <nil>}
	I1018 09:35:44.968626 1486102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-593480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-593480/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-593480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:35:45.170969 1486102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:35:45.171054 1486102 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:35:45.171091 1486102 ubuntu.go:190] setting up certificates
	I1018 09:35:45.171136 1486102 provision.go:84] configureAuth start
	I1018 09:35:45.171267 1486102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:35:45.225892 1486102 provision.go:143] copyHostCerts
	I1018 09:35:45.225985 1486102 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:35:45.226005 1486102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:35:45.226091 1486102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:35:45.226201 1486102 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:35:45.226208 1486102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:35:45.226236 1486102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:35:45.226293 1486102 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:35:45.226299 1486102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:35:45.226322 1486102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:35:45.226377 1486102 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-593480 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-593480 localhost minikube]
	I1018 09:35:44.767900 1485573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:35:44.919655 1485573 docker.go:234] disabling docker service ...
	I1018 09:35:44.919727 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:35:44.960560 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:35:44.980348 1485573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:35:45.241874 1485573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:35:45.484908 1485573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:35:45.500763 1485573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:35:45.517021 1485573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:35:45.517086 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.526444 1485573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:35:45.526516 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.535930 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.545142 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.554876 1485573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:35:45.564531 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.573785 1485573 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.591106 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.600335 1485573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:35:45.607996 1485573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:35:45.615556 1485573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:45.764580 1485573 ssh_runner.go:195] Run: sudo systemctl restart crio
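Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in (reconstructed from the commands; the file itself is not dumped in this log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]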
	I1018 09:35:45.923183 1485573 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:35:45.923259 1485573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:35:45.928159 1485573 start.go:563] Will wait 60s for crictl version
	I1018 09:35:45.928224 1485573 ssh_runner.go:195] Run: which crictl
	I1018 09:35:45.932660 1485573 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:35:45.965585 1485573 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:35:45.965664 1485573 ssh_runner.go:195] Run: crio --version
	I1018 09:35:45.993991 1485573 ssh_runner.go:195] Run: crio --version
	I1018 09:35:46.030411 1485573 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:35:45.887768 1486102 provision.go:177] copyRemoteCerts
	I1018 09:35:45.887891 1486102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:35:45.887965 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:45.905677 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:46.021000 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:35:46.043989 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:35:46.064738 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:35:46.090422 1486102 provision.go:87] duration metric: took 919.242204ms to configureAuth
	I1018 09:35:46.090453 1486102 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:35:46.090665 1486102 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:46.090773 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.111168 1486102 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:46.111473 1486102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34921 <nil> <nil>}
	I1018 09:35:46.111489 1486102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:35:46.529271 1486102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:35:46.529292 1486102 machine.go:96] duration metric: took 5.025166298s to provisionDockerMachine
	I1018 09:35:46.529302 1486102 start.go:293] postStartSetup for "default-k8s-diff-port-593480" (driver="docker")
	I1018 09:35:46.529313 1486102 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:35:46.529371 1486102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:35:46.529416 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.592009 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:46.697143 1486102 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:35:46.701337 1486102 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:35:46.701362 1486102 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:35:46.701373 1486102 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:35:46.701428 1486102 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:35:46.701528 1486102 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:35:46.701670 1486102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:35:46.710645 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:46.731657 1486102 start.go:296] duration metric: took 202.339187ms for postStartSetup
	I1018 09:35:46.731813 1486102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:35:46.731898 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.750986 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:46.853146 1486102 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:35:46.858874 1486102 fix.go:56] duration metric: took 6.015746623s for fixHost
	I1018 09:35:46.858896 1486102 start.go:83] releasing machines lock for "default-k8s-diff-port-593480", held for 6.015792473s
	I1018 09:35:46.858972 1486102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:35:46.879443 1486102 ssh_runner.go:195] Run: cat /version.json
	I1018 09:35:46.879515 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.879814 1486102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:35:46.879897 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.903063 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:46.928174 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:47.020218 1486102 ssh_runner.go:195] Run: systemctl --version
	I1018 09:35:47.111346 1486102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:35:47.158833 1486102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:35:47.165593 1486102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:35:47.165661 1486102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:35:47.178658 1486102 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:35:47.178682 1486102 start.go:495] detecting cgroup driver to use...
	I1018 09:35:47.178716 1486102 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:35:47.178762 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:35:47.198584 1486102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:35:47.215337 1486102 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:35:47.215396 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:35:47.234320 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:35:47.248735 1486102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:35:47.410228 1486102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:35:47.608474 1486102 docker.go:234] disabling docker service ...
	I1018 09:35:47.608557 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:35:47.627278 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:35:47.642721 1486102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:35:47.791719 1486102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:35:47.945032 1486102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:35:47.959584 1486102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:35:47.974728 1486102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:35:47.974807 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:47.984458 1486102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:35:47.984525 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:47.993927 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.003489 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.015790 1486102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:35:48.026151 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.037092 1486102 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.046801 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.057549 1486102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:35:48.066891 1486102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:35:48.075819 1486102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:48.293780 1486102 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:35:48.450383 1486102 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:35:48.450496 1486102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:35:48.454826 1486102 start.go:563] Will wait 60s for crictl version
	I1018 09:35:48.454906 1486102 ssh_runner.go:195] Run: which crictl
	I1018 09:35:48.459369 1486102 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:35:48.492817 1486102 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:35:48.492942 1486102 ssh_runner.go:195] Run: crio --version
	I1018 09:35:48.532830 1486102 ssh_runner.go:195] Run: crio --version
	I1018 09:35:48.579040 1486102 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:35:46.033339 1485573 cli_runner.go:164] Run: docker network inspect auto-275703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:35:46.062363 1485573 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:35:46.065904 1485573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
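That one-liner is minikube's idempotent hosts-file update: filter out any stale host.minikube.internal entry, append the current gateway mapping, and install the result with a privileged copy. Unpacked, with editorial comments:

    # Same logic as the logged one-liner, spread out.
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts    # keep everything except a stale entry
      printf '192.168.76.1\thost.minikube.internal\n'    # append the docker network gateway
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                         # root-owned copy; a plain redirect would lack privileges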
	I1018 09:35:46.076899 1485573 kubeadm.go:883] updating cluster {Name:auto-275703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-275703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:35:46.077032 1485573 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:46.077091 1485573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:46.119370 1485573 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:46.119390 1485573 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:35:46.119441 1485573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:46.157672 1485573 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:46.157693 1485573 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:35:46.157700 1485573 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:35:46.157785 1485573 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-275703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-275703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
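Note the pair of ExecStart lines in the generated unit above: in systemd, an empty "ExecStart=" clears any command inherited from the base kubelet.service, so this drop-in replaces the kubelet command line rather than appending a second one. The pattern, minimally:

    # Shape of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (scp'd below);
    # remaining kubelet flags exactly as dumped above.
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml ...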
	I1018 09:35:46.157870 1485573 ssh_runner.go:195] Run: crio config
	I1018 09:35:46.233844 1485573 cni.go:84] Creating CNI manager for ""
	I1018 09:35:46.233866 1485573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:46.233888 1485573 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:35:46.233910 1485573 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-275703 NodeName:auto-275703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:35:46.234036 1485573 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-275703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:35:46.234104 1485573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:35:46.243796 1485573 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:35:46.243905 1485573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:35:46.251465 1485573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1018 09:35:46.264263 1485573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:35:46.276928 1485573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
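The rendered kubeadm config is staged as kubeadm.yaml.new and only copied over kubeadm.yaml once bootstrap proceeds (the cp appears further down). To sanity-check such a file by hand, recent kubeadm releases (v1.31+) can validate it directly; a hypothetical manual check, not part of this run:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new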
	I1018 09:35:46.291499 1485573 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:35:46.295373 1485573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:35:46.306255 1485573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:46.447361 1485573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:46.470644 1485573 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703 for IP: 192.168.76.2
	I1018 09:35:46.470669 1485573 certs.go:195] generating shared ca certs ...
	I1018 09:35:46.470685 1485573 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:46.470825 1485573 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:35:46.470877 1485573 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:35:46.470889 1485573 certs.go:257] generating profile certs ...
	I1018 09:35:46.470947 1485573 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.key
	I1018 09:35:46.470961 1485573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt with IP's: []
	I1018 09:35:47.132208 1485573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt ...
	I1018 09:35:47.132242 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: {Name:mkc4fece3eb0c9a2624664e3692305aa02595479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:47.132463 1485573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.key ...
	I1018 09:35:47.132480 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.key: {Name:mk6114ba1da7c76e85cfb7a65b5a952f9d736289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:47.132612 1485573 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key.466c655c
	I1018 09:35:47.132643 1485573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt.466c655c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 09:35:47.366745 1485573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt.466c655c ...
	I1018 09:35:47.366827 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt.466c655c: {Name:mk6aed3acea771965a2309baf2d1b151fe996c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:47.367055 1485573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key.466c655c ...
	I1018 09:35:47.367093 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key.466c655c: {Name:mk24b2edc779545824c94a396476e5f326938849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:47.367217 1485573 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt.466c655c -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt
	I1018 09:35:47.367337 1485573 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key.466c655c -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key
	I1018 09:35:47.367428 1485573 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.key
	I1018 09:35:47.367460 1485573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.crt with IP's: []
	I1018 09:35:48.288703 1485573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.crt ...
	I1018 09:35:48.288776 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.crt: {Name:mk043e5152d1f5c945198728a86358f29b9fe528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:48.288995 1485573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.key ...
	I1018 09:35:48.289032 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.key: {Name:mkc5beb28c916e50b753161b57914e101a3a05b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:48.289253 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:35:48.289322 1485573 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:35:48.289348 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:35:48.289390 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:35:48.289447 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:35:48.289489 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:35:48.289565 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:48.290151 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:35:48.312533 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:35:48.332735 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:35:48.352550 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:35:48.377965 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 09:35:48.401875 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:35:48.425433 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:35:48.445818 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:35:48.477771 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:35:48.499285 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:35:48.517411 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:35:48.535701 1485573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:35:48.548782 1485573 ssh_runner.go:195] Run: openssl version
	I1018 09:35:48.555452 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:35:48.564189 1485573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:35:48.568749 1485573 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:35:48.568857 1485573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:35:48.621739 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:35:48.630761 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:35:48.643972 1485573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:48.648462 1485573 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:48.648526 1485573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:48.693069 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:35:48.701325 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:35:48.710577 1485573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:35:48.714414 1485573 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:35:48.714498 1485573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:35:48.758552 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
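The ls / openssl x509 -hash / ln -fs triple repeated for each PEM above implements OpenSSL's hashed-directory convention: certificate verification looks CAs up under <subject-hash>.0 in /etc/ssl/certs, so each installed certificate gets a hash-named symlink. Condensed, for one certificate:

    # e.g. minikubeCA.pem, whose subject hash is b5213941 per the log above
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"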
	I1018 09:35:48.769735 1485573 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:35:48.775226 1485573 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:35:48.775275 1485573 kubeadm.go:400] StartCluster: {Name:auto-275703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-275703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:48.775358 1485573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:35:48.775414 1485573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:35:48.820320 1485573 cri.go:89] found id: ""
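
Before deciding between a fresh init and a restart, StartCluster asks CRI-O for any existing kube-system containers; the empty result above (found id: "") means no prior cluster state on this node. A sketch of that query, assuming crictl is installed (minikube runs it over SSH inside the guest):

package cri

import (
	"os/exec"
	"strings"
)

// kubeSystemContainers reproduces the crictl query above: list all CRI
// container ids labelled with the kube-system namespace. An empty
// result means no previous cluster state on the node.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}
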
	I1018 09:35:48.820397 1485573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:35:48.831316 1485573 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:35:48.839703 1485573 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:35:48.839769 1485573 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:35:48.851357 1485573 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:35:48.851376 1485573 kubeadm.go:157] found existing configuration files:
	
	I1018 09:35:48.851431 1485573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:35:48.860105 1485573 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:35:48.860171 1485573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:35:48.867428 1485573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:35:48.875889 1485573 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:35:48.875950 1485573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:35:48.905267 1485573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:35:48.924480 1485573 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:35:48.924564 1485573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:35:48.936569 1485573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:35:48.958253 1485573 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:35:48.958354 1485573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
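
The block above is the stale-config sweep: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that is missing or points elsewhere is removed so kubeadm regenerates it. A compact sketch of the same loop, assuming direct read access to the files (minikube greps via sudo over SSH instead):

package kubeadm

import (
	"bytes"
	"os"
	"os/exec"
)

// cleanupStaleConfigs removes any kubeconfig that does not mention the
// expected control-plane endpoint so "kubeadm init" rewrites it.
func cleanupStaleConfigs() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or stale: delete so kubeadm regenerates it.
			exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}
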
	I1018 09:35:48.984053 1485573 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:35:49.040738 1485573 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:35:49.041146 1485573 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:35:49.084198 1485573 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:35:49.084332 1485573 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:35:49.084423 1485573 kubeadm.go:318] OS: Linux
	I1018 09:35:49.084532 1485573 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:35:49.084767 1485573 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:35:49.084829 1485573 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:35:49.084883 1485573 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:35:49.084964 1485573 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:35:49.085021 1485573 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:35:49.085071 1485573 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:35:49.085125 1485573 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:35:49.085177 1485573 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:35:49.218479 1485573 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:35:49.218648 1485573 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:35:49.218826 1485573 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:35:49.236275 1485573 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:35:49.242323 1485573 out.go:252]   - Generating certificates and keys ...
	I1018 09:35:49.242468 1485573 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:35:49.242560 1485573 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:35:48.581995 1486102 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-593480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:35:48.596882 1486102 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:35:48.600625 1486102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
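
The one-liner above rewrites /etc/hosts safely: filter out any existing host.minikube.internal entry, append the fresh mapping, write the result to a temp file, then sudo cp it over the original (cp rather than mv, so the ownership and labels on /etc/hosts survive). A Go sketch of the same idea, assuming local file access:

package hosts

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// updateHosts re-points "name" in /etc/hosts to ip: drop the old
// entry, append the new one, then copy the temp file over the original.
func updateHosts(name, ip string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // filter the stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return exec.Command("sudo", "cp", tmp, "/etc/hosts").Run()
}
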
	I1018 09:35:48.609709 1486102 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:35:48.609842 1486102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:48.609907 1486102 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:48.653903 1486102 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:48.653931 1486102 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:35:48.653989 1486102 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:48.689641 1486102 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:48.689665 1486102 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:35:48.689672 1486102 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1018 09:35:48.689769 1486102 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-593480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
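
Note the doubled ExecStart in the unit dump above: in a systemd drop-in, a bare "ExecStart=" first clears the command inherited from the base unit, and the second line supplies the replacement; without the reset, systemd rejects a second ExecStart for a normal service. A sketch that writes such a drop-in (hypothetical helper, paths as in the log):

package kubelet

import "os"

// writeDropIn writes a 10-kubeadm.conf drop-in like the one scp'd
// above. The empty "ExecStart=" clears the inherited command before
// the second line replaces it.
func writeDropIn(flags string) error {
	unit := "[Service]\n" +
		"ExecStart=\n" +
		"ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet " + flags + "\n"
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
		return err
	}
	return os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0644)
}
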
	I1018 09:35:48.689853 1486102 ssh_runner.go:195] Run: crio config
	I1018 09:35:48.764780 1486102 cni.go:84] Creating CNI manager for ""
	I1018 09:35:48.764839 1486102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:48.764882 1486102 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:35:48.764942 1486102 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-593480 NodeName:default-k8s-diff-port-593480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:35:48.765109 1486102 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-593480"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
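
The kubeadm config just printed is one YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---", which is why it ships as a single kubeadm.yaml. A minimal sketch that walks such a stream, assuming the gopkg.in/yaml.v3 dependency:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency for the sketch
)

// Each "---"-separated document carries its own kind; decode until EOF.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
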
	
	I1018 09:35:48.765212 1486102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:35:48.773966 1486102 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:35:48.774129 1486102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:35:48.782430 1486102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:35:48.795343 1486102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:35:48.808717 1486102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1018 09:35:48.826902 1486102 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:35:48.832106 1486102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:35:48.843135 1486102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:49.008030 1486102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:49.026120 1486102 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480 for IP: 192.168.85.2
	I1018 09:35:49.026138 1486102 certs.go:195] generating shared ca certs ...
	I1018 09:35:49.026154 1486102 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:49.026291 1486102 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:35:49.026331 1486102 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:35:49.026337 1486102 certs.go:257] generating profile certs ...
	I1018 09:35:49.026418 1486102 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.key
	I1018 09:35:49.026482 1486102 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5
	I1018 09:35:49.026519 1486102 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key
	I1018 09:35:49.026665 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:35:49.026693 1486102 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:35:49.026701 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:35:49.026726 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:35:49.026747 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:35:49.026769 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:35:49.026820 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:49.027423 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:35:49.067046 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:35:49.097853 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:35:49.124692 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:35:49.141971 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:35:49.159544 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:35:49.185054 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:35:49.232707 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:35:49.305614 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:35:49.343654 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:35:49.363811 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:35:49.381044 1486102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:35:49.393311 1486102 ssh_runner.go:195] Run: openssl version
	I1018 09:35:49.399541 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:35:49.407431 1486102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:35:49.410912 1486102 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:35:49.411018 1486102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:35:49.462256 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:35:49.470075 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:35:49.478031 1486102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:49.485441 1486102 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:49.485554 1486102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:49.526421 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:35:49.535143 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:35:49.543584 1486102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:35:49.548149 1486102 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:35:49.548307 1486102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:35:49.593000 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:35:49.601462 1486102 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:35:49.605550 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:35:49.650842 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:35:49.701224 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:35:49.743116 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:35:49.805091 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:35:49.892887 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
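
The run of "openssl x509 -checkend 86400" calls above is the reuse gate: each command exits non-zero if its certificate expires within the next 24 hours (86400 seconds), in which case the cert would be regenerated. The same check in pure Go, as a sketch with local file access (minikube shells out over SSH instead):

package certs

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within
// the window; equivalent to "openssl x509 -noout -checkend <seconds>".
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}
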
	I1018 09:35:49.980910 1486102 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:49.981053 1486102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:35:49.981165 1486102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:35:50.089138 1486102 cri.go:89] found id: "f25f31e0de7b14e6ec30c9543448ad6c36163463aa5bb218aac0f99a95ccfe92"
	I1018 09:35:50.089174 1486102 cri.go:89] found id: "733c7cf0be6cd400ab00223c34a62e45c087d4073aeca1345162c44182d78944"
	I1018 09:35:50.089180 1486102 cri.go:89] found id: ""
	I1018 09:35:50.089265 1486102 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:35:50.118623 1486102 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:50Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:35:50.118741 1486102 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:35:50.147855 1486102 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:35:50.147878 1486102 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:35:50.147955 1486102 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:35:50.169883 1486102 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:35:50.170362 1486102 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-593480" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:50.170515 1486102 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-1274243/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-593480" cluster setting kubeconfig missing "default-k8s-diff-port-593480" context setting]
	I1018 09:35:50.170870 1486102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:50.172280 1486102 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:35:50.220091 1486102 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 09:35:50.220123 1486102 kubeadm.go:601] duration metric: took 72.238577ms to restartPrimaryControlPlane
	I1018 09:35:50.220132 1486102 kubeadm.go:402] duration metric: took 239.242577ms to StartCluster
	I1018 09:35:50.220147 1486102 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:50.220247 1486102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:50.222642 1486102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:50.223121 1486102 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:50.223181 1486102 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:35:50.223234 1486102 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:35:50.223482 1486102 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-593480"
	I1018 09:35:50.223506 1486102 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-593480"
	W1018 09:35:50.223518 1486102 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:35:50.223552 1486102 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-593480"
	I1018 09:35:50.223566 1486102 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-593480"
	W1018 09:35:50.223571 1486102 addons.go:247] addon dashboard should already be in state true
	I1018 09:35:50.223588 1486102 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:35:50.224087 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:50.224256 1486102 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:35:50.224741 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:50.226150 1486102 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-593480"
	I1018 09:35:50.226207 1486102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-593480"
	I1018 09:35:50.226508 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:50.229004 1486102 out.go:179] * Verifying Kubernetes components...
	I1018 09:35:50.232431 1486102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:50.283049 1486102 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-593480"
	W1018 09:35:50.283073 1486102 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:35:50.283099 1486102 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:35:50.283530 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:50.285410 1486102 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:35:50.288772 1486102 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:35:50.288875 1486102 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:50.288885 1486102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:35:50.288953 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:50.299890 1486102 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:35:50.302806 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:35:50.302967 1486102 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:35:50.303046 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:50.324163 1486102 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:50.324188 1486102 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:35:50.324256 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:50.341272 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:50.349534 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:50.371418 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
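
The ssh clients at 127.0.0.1:34921 follow from the docker container inspect template runs above: with the docker driver, the guest's 22/tcp is published on an ephemeral host port, and minikube dials that mapping. A sketch that recovers the port the same way, assuming the docker CLI is available:

package docker

import (
	"os/exec"
	"strings"
)

// sshHostPort returns the host port docker published for the
// container's 22/tcp, using the same inspect template as the log.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}
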
	I1018 09:35:49.808474 1485573 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:35:50.324016 1485573 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:35:50.557598 1485573 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:35:50.910598 1485573 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:35:51.881876 1485573 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:35:51.882312 1485573 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-275703 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:35:52.164643 1485573 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:35:52.165094 1485573 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-275703 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:35:53.366851 1485573 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:35:50.634608 1486102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:50.712321 1486102 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-593480" to be "Ready" ...
	I1018 09:35:50.753379 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:35:50.753399 1486102 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:35:50.794207 1486102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:50.831160 1486102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:50.856681 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:35:50.856754 1486102 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:35:50.982081 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:35:50.982153 1486102 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:35:51.133162 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:35:51.133236 1486102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:35:51.238858 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:35:51.238932 1486102 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:35:51.363833 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:35:51.363921 1486102 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:35:51.432456 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:35:51.432534 1486102 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:35:51.482840 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:35:51.482917 1486102 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:35:51.553864 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:35:51.553939 1486102 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:35:51.598936 1486102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:35:55.511368 1485573 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:35:55.686518 1485573 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:35:55.686600 1485573 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:35:55.850898 1485573 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:35:56.612198 1485573 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:35:57.239264 1485573 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:35:57.341424 1485573 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:35:57.600192 1485573 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:35:57.600300 1485573 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:35:57.604236 1485573 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:35:57.607913 1485573 out.go:252]   - Booting up control plane ...
	I1018 09:35:57.608027 1485573 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:35:57.608117 1485573 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:35:57.610568 1485573 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:35:57.649863 1485573 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:35:57.649988 1485573 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:35:57.669807 1485573 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:35:57.669911 1485573 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:35:57.669953 1485573 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:35:57.923294 1485573 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:35:57.923419 1485573 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:35:57.650038 1486102 node_ready.go:49] node "default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:57.650069 1486102 node_ready.go:38] duration metric: took 6.937669344s for node "default-k8s-diff-port-593480" to be "Ready" ...
	I1018 09:35:57.650082 1486102 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:35:57.650140 1486102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:35:58.126567 1486102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.33227718s)
	I1018 09:36:00.902855 1486102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.071619708s)
	I1018 09:36:00.902976 1486102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.303944964s)
	I1018 09:36:00.903100 1486102 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.252945029s)
	I1018 09:36:00.903120 1486102 api_server.go:72] duration metric: took 10.679912361s to wait for apiserver process to appear ...
	I1018 09:36:00.903126 1486102 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:36:00.903142 1486102 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1018 09:36:00.906363 1486102 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-593480 addons enable metrics-server
	
	I1018 09:36:00.909457 1486102 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1018 09:36:00.444249 1485573 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.519027818s
	I1018 09:36:00.447045 1485573 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:36:00.447152 1485573 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 09:36:00.447637 1485573 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:36:00.448853 1485573 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:36:04.570713 1485573 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.121566906s
	I1018 09:36:00.913335 1486102 addons.go:514] duration metric: took 10.690090433s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1018 09:36:00.919865 1486102 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1018 09:36:00.921242 1486102 api_server.go:141] control plane version: v1.34.1
	I1018 09:36:00.921270 1486102 api_server.go:131] duration metric: took 18.137472ms to wait for apiserver health ...
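
The healthz gate above is a plain HTTPS GET against the published API port; a 200 whose body is "ok" counts as healthy. A sketch of such a probe; TLS verification is skipped here for brevity, whereas the real check trusts the cluster CA bundle:

package health

import (
	"crypto/tls"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy GETs https://<addr>/healthz and accepts 200/"ok".
func apiserverHealthy(addr string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Insecure for the sketch only; verify against the CA in practice.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}
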
	I1018 09:36:00.921281 1486102 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:36:00.942050 1486102 system_pods.go:59] 8 kube-system pods found
	I1018 09:36:00.942091 1486102 system_pods.go:61] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:36:00.942101 1486102 system_pods.go:61] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:36:00.942114 1486102 system_pods.go:61] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:36:00.942121 1486102 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:36:00.942129 1486102 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:36:00.942137 1486102 system_pods.go:61] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:36:00.942144 1486102 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:36:00.942154 1486102 system_pods.go:61] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Running
	I1018 09:36:00.942160 1486102 system_pods.go:74] duration metric: took 20.874216ms to wait for pod list to return data ...
	I1018 09:36:00.942174 1486102 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:36:00.968799 1486102 default_sa.go:45] found service account: "default"
	I1018 09:36:00.968832 1486102 default_sa.go:55] duration metric: took 26.651147ms for default service account to be created ...
	I1018 09:36:00.968843 1486102 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:36:00.971717 1486102 system_pods.go:86] 8 kube-system pods found
	I1018 09:36:00.971754 1486102 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:36:00.971764 1486102 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:36:00.971770 1486102 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:36:00.971776 1486102 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:36:00.971785 1486102 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:36:00.971790 1486102 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:36:00.971798 1486102 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:36:00.971802 1486102 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Running
	I1018 09:36:00.971808 1486102 system_pods.go:126] duration metric: took 2.960402ms to wait for k8s-apps to be running ...
	I1018 09:36:00.971821 1486102 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:36:00.971892 1486102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:36:01.002727 1486102 system_svc.go:56] duration metric: took 30.895336ms WaitForService to wait for kubelet
	I1018 09:36:01.002762 1486102 kubeadm.go:586] duration metric: took 10.779553144s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:36:01.002783 1486102 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:36:01.011023 1486102 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:36:01.011060 1486102 node_conditions.go:123] node cpu capacity is 2
	I1018 09:36:01.011072 1486102 node_conditions.go:105] duration metric: took 8.2842ms to run NodePressure ...
	I1018 09:36:01.011085 1486102 start.go:241] waiting for startup goroutines ...
	I1018 09:36:01.011093 1486102 start.go:246] waiting for cluster config update ...
	I1018 09:36:01.011103 1486102 start.go:255] writing updated cluster config ...
	I1018 09:36:01.011387 1486102 ssh_runner.go:195] Run: rm -f paused
	I1018 09:36:01.022750 1486102 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:36:01.029988 1486102 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:36:03.037812 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	I1018 09:36:06.762589 1485573 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.313214106s
	I1018 09:36:08.953211 1485573 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.505133315s
	I1018 09:36:08.983429 1485573 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:36:09.005550 1485573 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:36:09.026478 1485573 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:36:09.026944 1485573 kubeadm.go:318] [mark-control-plane] Marking the node auto-275703 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:36:09.050998 1485573 kubeadm.go:318] [bootstrap-token] Using token: c5woyt.2467er8qsdbu8ipv
	I1018 09:36:09.054136 1485573 out.go:252]   - Configuring RBAC rules ...
	I1018 09:36:09.054272 1485573 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:36:09.061541 1485573 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:36:09.071665 1485573 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:36:09.081135 1485573 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:36:09.086125 1485573 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:36:09.091072 1485573 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:36:09.361650 1485573 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:36:09.815924 1485573 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:36:10.372359 1485573 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:36:10.373494 1485573 kubeadm.go:318] 
	I1018 09:36:10.373567 1485573 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:36:10.373574 1485573 kubeadm.go:318] 
	I1018 09:36:10.373654 1485573 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:36:10.373658 1485573 kubeadm.go:318] 
	I1018 09:36:10.373685 1485573 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:36:10.373747 1485573 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:36:10.373805 1485573 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:36:10.373810 1485573 kubeadm.go:318] 
	I1018 09:36:10.373867 1485573 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:36:10.373871 1485573 kubeadm.go:318] 
	I1018 09:36:10.373921 1485573 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:36:10.373925 1485573 kubeadm.go:318] 
	I1018 09:36:10.373979 1485573 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:36:10.374057 1485573 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:36:10.374129 1485573 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:36:10.374134 1485573 kubeadm.go:318] 
	I1018 09:36:10.374222 1485573 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:36:10.374302 1485573 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:36:10.374307 1485573 kubeadm.go:318] 
	I1018 09:36:10.374394 1485573 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token c5woyt.2467er8qsdbu8ipv \
	I1018 09:36:10.374502 1485573 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 \
	I1018 09:36:10.374524 1485573 kubeadm.go:318] 	--control-plane 
	I1018 09:36:10.374528 1485573 kubeadm.go:318] 
	I1018 09:36:10.374616 1485573 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:36:10.374622 1485573 kubeadm.go:318] 
	I1018 09:36:10.374708 1485573 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token c5woyt.2467er8qsdbu8ipv \
	I1018 09:36:10.374815 1485573 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 
	I1018 09:36:10.380564 1485573 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 09:36:10.380806 1485573 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 09:36:10.380915 1485573 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
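	
	Annotation: the join commands above carry the CA pin inline. As a hedged aside (not part of the test run), that sha256 hash can be recomputed on the control plane with the standard kubeadm procedure; note minikube usually keeps its CA at /var/lib/minikube/certs/ca.crt rather than the kubeadm default shown here:
	
	# Recompute the --discovery-token-ca-cert-hash printed by kubeadm above.
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	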
	I1018 09:36:10.380933 1485573 cni.go:84] Creating CNI manager for ""
	I1018 09:36:10.380949 1485573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:36:10.384305 1485573 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1018 09:36:05.534992 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:07.535173 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:09.541476 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	I1018 09:36:10.388127 1485573 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:36:10.399446 1485573 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:36:10.399466 1485573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:36:10.433203 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:36:10.928493 1485573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:36:10.928812 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:10.928930 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-275703 minikube.k8s.io/updated_at=2025_10_18T09_36_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=auto-275703 minikube.k8s.io/primary=true
	I1018 09:36:11.440359 1485573 ops.go:34] apiserver oom_adj: -16
	I1018 09:36:11.440472 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:11.940569 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:12.441558 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:12.940595 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:13.441053 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:13.940583 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:14.441292 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:14.631272 1485573 kubeadm.go:1113] duration metric: took 3.702526754s to wait for elevateKubeSystemPrivileges
	I1018 09:36:14.631368 1485573 kubeadm.go:402] duration metric: took 25.856085084s to StartCluster
	I1018 09:36:14.631400 1485573 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:36:14.631487 1485573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:36:14.632580 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:36:14.632864 1485573 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:36:14.633002 1485573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:36:14.633308 1485573 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:36:14.633395 1485573 addons.go:69] Setting storage-provisioner=true in profile "auto-275703"
	I1018 09:36:14.633412 1485573 addons.go:238] Setting addon storage-provisioner=true in "auto-275703"
	I1018 09:36:14.633435 1485573 host.go:66] Checking if "auto-275703" exists ...
	I1018 09:36:14.633991 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:36:14.634286 1485573 config.go:182] Loaded profile config "auto-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:36:14.634418 1485573 addons.go:69] Setting default-storageclass=true in profile "auto-275703"
	I1018 09:36:14.634451 1485573 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-275703"
	I1018 09:36:14.634777 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:36:14.638634 1485573 out.go:179] * Verifying Kubernetes components...
	I1018 09:36:14.642374 1485573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:36:14.672019 1485573 addons.go:238] Setting addon default-storageclass=true in "auto-275703"
	I1018 09:36:14.672054 1485573 host.go:66] Checking if "auto-275703" exists ...
	I1018 09:36:14.672455 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:36:14.689303 1485573 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1018 09:36:12.036542 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:14.037408 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	I1018 09:36:14.710275 1485573 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:36:14.710293 1485573 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:36:14.710354 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:36:14.710546 1485573 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:36:14.710555 1485573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:36:14.710609 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:36:14.747012 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:36:14.758438 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:36:15.173737 1485573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:36:15.282227 1485573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:36:15.400709 1485573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:36:15.400822 1485573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:36:16.321101 1485573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.038840499s)
	I1018 09:36:16.321898 1485573 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 09:36:16.322105 1485573 node_ready.go:35] waiting up to 15m0s for node "auto-275703" to be "Ready" ...
	I1018 09:36:16.322864 1485573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.149100652s)
	I1018 09:36:16.406369 1485573 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:36:16.411616 1485573 addons.go:514] duration metric: took 1.77828891s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:36:16.826956 1485573 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-275703" context rescaled to 1 replicas
	W1018 09:36:18.325461 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:16.042868 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:18.536398 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:20.326235 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:22.825874 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:21.035140 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:23.535798 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:25.325505 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:27.825388 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:25.536067 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:28.035914 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:30.037485 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:30.325353 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:32.825103 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:32.535190 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:34.537062 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:34.825213 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:37.324968 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:39.325119 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:37.036358 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:39.535359 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	I1018 09:36:41.035546 1486102 pod_ready.go:94] pod "coredns-66bc5c9577-lxwgf" is "Ready"
	I1018 09:36:41.035570 1486102 pod_ready.go:86] duration metric: took 40.005550821s for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.039128 1486102 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.044774 1486102 pod_ready.go:94] pod "etcd-default-k8s-diff-port-593480" is "Ready"
	I1018 09:36:41.044798 1486102 pod_ready.go:86] duration metric: took 5.648835ms for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.047041 1486102 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.054309 1486102 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-593480" is "Ready"
	I1018 09:36:41.054334 1486102 pod_ready.go:86] duration metric: took 7.226433ms for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.056530 1486102 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.233768 1486102 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-593480" is "Ready"
	I1018 09:36:41.233855 1486102 pod_ready.go:86] duration metric: took 177.302903ms for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.434101 1486102 pod_ready.go:83] waiting for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.834029 1486102 pod_ready.go:94] pod "kube-proxy-lz9p5" is "Ready"
	I1018 09:36:41.834066 1486102 pod_ready.go:86] duration metric: took 399.937557ms for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:42.034591 1486102 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:42.434257 1486102 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-593480" is "Ready"
	I1018 09:36:42.434282 1486102 pod_ready.go:86] duration metric: took 399.656679ms for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:42.434295 1486102 pod_ready.go:40] duration metric: took 41.411509734s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:36:42.490888 1486102 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:36:42.494220 1486102 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-593480" cluster and "default" namespace by default
	W1018 09:36:41.825190 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:43.826909 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:46.325026 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:48.325210 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:50.824939 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:52.825563 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
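	
	Annotation: the node_ready retries above poll until node "auto-275703" reports Ready, with a 15m budget. Outside the harness the same wait can be expressed directly with kubectl (a minimal sketch; the kubeconfig context is named after the profile, as minikube normally sets it up):
	
	# Block up to the same 15m the test allows for node readiness.
	kubectl --context auto-275703 wait --for=condition=Ready \
	  node/auto-275703 --timeout=15m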
	
	
	==> CRI-O <==
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.344870998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.352008555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.3525652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.369481159Z" level=info msg="Created container c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm/dashboard-metrics-scraper" id=c88f2faa-3e41-4153-8914-a18d63994776 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.370641028Z" level=info msg="Starting container: c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6" id=85c6660b-d7df-4f8a-8bcb-75facb612c25 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.373412627Z" level=info msg="Started container" PID=1643 containerID=c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm/dashboard-metrics-scraper id=85c6660b-d7df-4f8a-8bcb-75facb612c25 name=/runtime.v1.RuntimeService/StartContainer sandboxID=06ce6b13e3aee14b3382a8fbc4e4759a9c4dadba8ab6952b80f633fac4f0a880
	Oct 18 09:36:36 default-k8s-diff-port-593480 conmon[1641]: conmon c86fad3e7dc491c95dc3 <ninfo>: container 1643 exited with status 1
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.706984609Z" level=info msg="Removing container: 043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924" id=4e4056a6-e726-4002-bf6e-7fc1d2855f42 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.719697444Z" level=info msg="Error loading conmon cgroup of container 043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924: cgroup deleted" id=4e4056a6-e726-4002-bf6e-7fc1d2855f42 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.726015717Z" level=info msg="Removed container 043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm/dashboard-metrics-scraper" id=4e4056a6-e726-4002-bf6e-7fc1d2855f42 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.985856613Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.989439225Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.989471118Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.989493895Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.99267468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.992708705Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.99273323Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.996309089Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.996342992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.99637079Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.999413454Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.999565507Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.9996501Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:40 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:40.003697421Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:40 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:40.003999664Z" level=info msg="Updated default CNI network name to kindnet"
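	
	Annotation: the CNI monitoring events above show CRI-O re-reading the kindnet conflist each time kindnet rewrites it via the .temp-then-rename dance. A hedged way to see the file CRI-O settled on is to read it from inside the node:
	
	# Print the CNI config referenced in the CRI-O events above.
	minikube -p default-k8s-diff-port-593480 ssh -- \
	  sudo cat /etc/cni/net.d/10-kindnet.conflist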
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c86fad3e7dc49       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   06ce6b13e3aee       dashboard-metrics-scraper-6ffb444bf9-f47cm             kubernetes-dashboard
	77edd5912d990       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   dd2c354cde8fb       storage-provisioner                                    kube-system
	972710a1973b9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   47 seconds ago       Running             kubernetes-dashboard        0                   cbdce97f3968d       kubernetes-dashboard-855c9754f9-b2xsq                  kubernetes-dashboard
	f4f8772e3187d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   a78ed3c99378b       busybox                                                default
	e96bf01e397dc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   e9c9002f69570       coredns-66bc5c9577-lxwgf                               kube-system
	7f7413ae9355d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   6e7b468a93731       kube-proxy-lz9p5                                       kube-system
	219038721a043       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   dd2c354cde8fb       storage-provisioner                                    kube-system
	40b8d46047871       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   3436b454fff20       kindnet-ptbw6                                          kube-system
	a2ae42e7111f6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   3a754f6cfd109       kube-apiserver-default-k8s-diff-port-593480            kube-system
	52621647e0872       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   da3cad0a5e31a       etcd-default-k8s-diff-port-593480                      kube-system
	f25f31e0de7b1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   353065344be32       kube-scheduler-default-k8s-diff-port-593480            kube-system
	733c7cf0be6cd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   df40fc36d9569       kube-controller-manager-default-k8s-diff-port-593480   kube-system
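	
	Annotation: this table comes from the container runtime; roughly the same view can be reproduced with crictl on the node (a sketch, assuming crictl is on PATH inside the minikube node, which it normally is):
	
	# List all containers, including exited ones, as in the table above.
	minikube -p default-k8s-diff-port-593480 ssh -- sudo crictl ps -a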
	
	
	==> coredns [e96bf01e397dc74fec93b72b52bc80ee6fe7bddee4e09809a7a655beb5a2e18a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53589 - 36133 "HINFO IN 6823403413020491029.5837726841550140400. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013932338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
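	
	Annotation: the 10.96.0.1:443 i/o timeouts above cleared once the apiserver became reachable, and the earlier "host record injected" step added a hosts block for host.minikube.internal. A hedged spot-check from a throwaway pod (image and tag are illustrative, not from the run):
	
	# Resolve the injected host record through the recovered CoreDNS.
	kubectl --context default-k8s-diff-port-593480 run dns-check --rm -i \
	  --restart=Never --image=busybox:1.36 -- \
	  nslookup host.minikube.internal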
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-593480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-593480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=default-k8s-diff-port-593480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_34_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-593480
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:36:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:36:28 +0000   Sat, 18 Oct 2025 09:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:36:28 +0000   Sat, 18 Oct 2025 09:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:36:28 +0000   Sat, 18 Oct 2025 09:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:36:28 +0000   Sat, 18 Oct 2025 09:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-593480
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                49945b4a-cdd7-400f-9239-4b91af7db42e
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-lxwgf                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m29s
	  kube-system                 etcd-default-k8s-diff-port-593480                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m34s
	  kube-system                 kindnet-ptbw6                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m30s
	  kube-system                 kube-apiserver-default-k8s-diff-port-593480             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-593480    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-lz9p5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-default-k8s-diff-port-593480             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-f47cm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b2xsq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m27s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Warning  CgroupV1                 2m44s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m44s (x8 over 2m44s)  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m44s (x8 over 2m44s)  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m44s (x8 over 2m44s)  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m34s                  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m34s                  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s                  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m30s                  node-controller  Node default-k8s-diff-port-593480 event: Registered Node default-k8s-diff-port-593480 in Controller
	  Normal   NodeReady                107s                   kubelet          Node default-k8s-diff-port-593480 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                    node-controller  Node default-k8s-diff-port-593480 event: Registered Node default-k8s-diff-port-593480 in Controller
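	
	Annotation: the Events table above mixes records from both kubelet restarts (2m44s/2m34s ages versus 68s). To regenerate this view, or to see events beyond the node object, something like:
	
	# Re-collect the node description and recent cluster events.
	kubectl --context default-k8s-diff-port-593480 describe node \
	  default-k8s-diff-port-593480
	kubectl --context default-k8s-diff-port-593480 get events -A \
	  --sort-by=.lastTimestamp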
	
	
	==> dmesg <==
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	[Oct18 09:34] overlayfs: idmapped layers are currently not supported
	[ +34.458375] overlayfs: idmapped layers are currently not supported
	[Oct18 09:35] overlayfs: idmapped layers are currently not supported
	[ +33.991180] overlayfs: idmapped layers are currently not supported
	[Oct18 09:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [52621647e0872882c5501e4bb01f9aa34bd6d544528f4617f5c91ad85298df0c] <==
	{"level":"warn","ts":"2025-10-18T09:35:54.953982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.016014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.057727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.164262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.220317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.272835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.306096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.340660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.365609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.429826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.509890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.510982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.562582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.584447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.635924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.674255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.732429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.775975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.800471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.836775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.884542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.902454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.947330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.956900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:56.130621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36990","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:57 up 11:19,  0 user,  load average: 3.46, 3.54, 2.90
	Linux default-k8s-diff-port-593480 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40b8d460478714d481d2976c4d0eab5fc8a6be7829e3e42b66c70ad0ca58af09] <==
	I1018 09:35:59.576785       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:35:59.578944       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 09:35:59.579109       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:35:59.579122       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:35:59.579134       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:35:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:35:59.985256       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:35:59.985284       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:35:59.985293       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:36:00.016340       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:36:29.985252       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:36:30.017419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:36:30.021444       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:36:30.021550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 09:36:31.507928       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:36:31.507991       1 metrics.go:72] Registering metrics
	I1018 09:36:31.508053       1 controller.go:711] "Syncing nftables rules"
	I1018 09:36:39.985533       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:36:39.985588       1 main.go:301] handling current node
	I1018 09:36:49.984856       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:36:49.984892       1 main.go:301] handling current node
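	
	Annotation: kindnet's list calls hit the same 10.96.0.1 timeout seen in the CoreDNS section, then its caches synced at 09:36:31 and node handling resumed. To confirm it stayed healthy afterwards (the app=kindnet label is the DaemonSet's usual selector, an assumption):
	
	# Tail kindnet logs across the DaemonSet.
	kubectl --context default-k8s-diff-port-593480 -n kube-system logs \
	  -l app=kindnet --tail=20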
	
	
	==> kube-apiserver [a2ae42e7111f68e250d80963ab8db67a0cbd21a5286168c732b5ae60441c17b7] <==
	I1018 09:35:57.710024       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:35:57.710070       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:35:57.710145       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:35:57.710304       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:35:57.710315       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:35:57.710321       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:35:57.710326       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:35:57.710546       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:35:57.731115       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:35:57.731180       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:35:57.749807       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:35:57.749835       1 policy_source.go:240] refreshing policies
	E1018 09:35:57.781670       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:35:57.797802       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:35:58.335866       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:35:58.358465       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:36:00.411649       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:36:00.613543       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:36:00.685730       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:36:00.701585       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:36:00.817372       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.247.219"}
	I1018 09:36:00.838773       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.39.252"}
	I1018 09:36:02.480700       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:36:02.674811       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:36:02.729567       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [733c7cf0be6cd400ab00223c34a62e45c087d4073aeca1345162c44182d78944] <==
	I1018 09:36:02.272355       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:36:02.272369       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:36:02.279889       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:36:02.291266       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:36:02.294075       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:36:02.294212       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:36:02.294296       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:36:02.294337       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:36:02.294365       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:36:02.295494       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:36:02.296919       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:36:02.297244       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:36:02.303879       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:36:02.304025       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:36:02.304119       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-593480"
	I1018 09:36:02.304202       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:36:02.304714       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:36:02.313019       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:36:02.325601       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:36:02.325917       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:36:02.333332       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:36:02.334497       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:36:02.334541       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:36:02.340756       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:36:02.341867       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-proxy [7f7413ae9355d190c2c94e35f835f62c3b9bfedcd668a89a6a63bee7beadb8e8] <==
	I1018 09:36:00.626948       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:36:00.801215       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:36:00.908945       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:36:00.918867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 09:36:00.919050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:36:01.050343       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:36:01.050415       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:36:01.072119       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:36:01.072468       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:36:01.072490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:36:01.080629       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:36:01.080706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:36:01.081058       1 config.go:200] "Starting service config controller"
	I1018 09:36:01.081121       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:36:01.081625       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:36:01.082493       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:36:01.083424       1 config.go:309] "Starting node config controller"
	I1018 09:36:01.083489       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:36:01.083521       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:36:01.181368       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:36:01.181506       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:36:01.183056       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
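	
	Annotation: the nodePortAddresses warning above is advisory; kube-proxy's effective settings live in the kube-proxy ConfigMap that kubeadm manages. A hedged way to inspect them:
	
	# Show the kube-proxy configuration the warning refers to.
	kubectl --context default-k8s-diff-port-593480 -n kube-system get \
	  configmap kube-proxy -o yaml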
	
	
	==> kube-scheduler [f25f31e0de7b14e6ec30c9543448ad6c36163463aa5bb218aac0f99a95ccfe92] <==
	I1018 09:35:55.839764       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:36:00.711054       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:36:00.711246       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:36:00.731048       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:36:00.731099       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:36:00.731125       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:36:00.731150       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:36:00.731169       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:36:00.731183       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:36:00.731187       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:36:00.731191       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:36:00.831697       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:36:00.831814       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:36:00.831832       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:03.015962     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdc7\" (UniqueName: \"kubernetes.io/projected/b6bcdba7-3aa5-4913-b828-bba9ad382a0a-kube-api-access-lbdc7\") pod \"kubernetes-dashboard-855c9754f9-b2xsq\" (UID: \"b6bcdba7-3aa5-4913-b828-bba9ad382a0a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2xsq"
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:03.016646     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/181e6493-517e-4171-abff-1268e0723fd4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-f47cm\" (UID: \"181e6493-517e-4171-abff-1268e0723fd4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm"
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:03.016824     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgwg\" (UniqueName: \"kubernetes.io/projected/181e6493-517e-4171-abff-1268e0723fd4-kube-api-access-tfgwg\") pod \"dashboard-metrics-scraper-6ffb444bf9-f47cm\" (UID: \"181e6493-517e-4171-abff-1268e0723fd4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm"
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:03.016962     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b6bcdba7-3aa5-4913-b828-bba9ad382a0a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-b2xsq\" (UID: \"b6bcdba7-3aa5-4913-b828-bba9ad382a0a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2xsq"
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: W1018 09:36:03.220497     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/crio-cbdce97f3968d5cc491fa74cf0711ccb25fa161d09a448c85763e3a6cbe07fd1 WatchSource:0}: Error finding container cbdce97f3968d5cc491fa74cf0711ccb25fa161d09a448c85763e3a6cbe07fd1: Status 404 returned error can't find the container with id cbdce97f3968d5cc491fa74cf0711ccb25fa161d09a448c85763e3a6cbe07fd1
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: W1018 09:36:03.238726     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/crio-06ce6b13e3aee14b3382a8fbc4e4759a9c4dadba8ab6952b80f633fac4f0a880 WatchSource:0}: Error finding container 06ce6b13e3aee14b3382a8fbc4e4759a9c4dadba8ab6952b80f633fac4f0a880: Status 404 returned error can't find the container with id 06ce6b13e3aee14b3382a8fbc4e4759a9c4dadba8ab6952b80f633fac4f0a880
	Oct 18 09:36:16 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:16.650428     777 scope.go:117] "RemoveContainer" containerID="9e628c53e81d3a329adfe75e4720fcdcf60f2bfb241bf5eb77346eadefb46a4d"
	Oct 18 09:36:16 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:16.681463     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2xsq" podStartSLOduration=7.582776215 podStartE2EDuration="14.681446392s" podCreationTimestamp="2025-10-18 09:36:02 +0000 UTC" firstStartedPulling="2025-10-18 09:36:03.224560795 +0000 UTC m=+14.192936861" lastFinishedPulling="2025-10-18 09:36:10.323230972 +0000 UTC m=+21.291607038" observedRunningTime="2025-10-18 09:36:10.660726546 +0000 UTC m=+21.629102629" watchObservedRunningTime="2025-10-18 09:36:16.681446392 +0000 UTC m=+27.649822459"
	Oct 18 09:36:17 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:17.654521     777 scope.go:117] "RemoveContainer" containerID="9e628c53e81d3a329adfe75e4720fcdcf60f2bfb241bf5eb77346eadefb46a4d"
	Oct 18 09:36:17 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:17.654828     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:17 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:17.654969     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:18 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:18.658629     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:18 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:18.658795     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:23 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:23.172830     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:23 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:23.173043     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:30 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:30.688686     777 scope.go:117] "RemoveContainer" containerID="219038721a04310118069d66f5e074f6d504bd7804e061291016a223d0b92b7c"
	Oct 18 09:36:36 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:36.341809     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:36 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:36.705353     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:36 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:36.705719     777 scope.go:117] "RemoveContainer" containerID="c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6"
	Oct 18 09:36:36 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:36.705967     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:43 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:43.172638     777 scope.go:117] "RemoveContainer" containerID="c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6"
	Oct 18 09:36:43 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:43.173294     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:54 default-k8s-diff-port-593480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:36:54 default-k8s-diff-port-593480 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:36:54 default-k8s-diff-port-593480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
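The kubelet entries above show dashboard-metrics-scraper cycling through CrashLoopBackOff with the usual doubling back-off (10s, then 20s), and the final systemd lines record kubelet being stopped at 09:36:54, i.e. the pause under test. A quick triage sketch using the pod name from this log (--previous fetches output from the crashed container instance):

	kubectl --context default-k8s-diff-port-593480 -n kubernetes-dashboard \
	  describe pod dashboard-metrics-scraper-6ffb444bf9-f47cm
	kubectl --context default-k8s-diff-port-593480 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-f47cm --previous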
	
	
	==> kubernetes-dashboard [972710a1973b9cf8acbd41550a4a3ebfb5ec96b320e8f9397a2deaf9b46c3e0c] <==
	2025/10/18 09:36:10 Using namespace: kubernetes-dashboard
	2025/10/18 09:36:10 Using in-cluster config to connect to apiserver
	2025/10/18 09:36:10 Using secret token for csrf signing
	2025/10/18 09:36:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:36:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:36:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:36:10 Generating JWE encryption key
	2025/10/18 09:36:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:36:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:36:11 Initializing JWE encryption key from synchronized object
	2025/10/18 09:36:11 Creating in-cluster Sidecar client
	2025/10/18 09:36:11 Serving insecurely on HTTP port: 9090
	2025/10/18 09:36:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:36:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:36:10 Starting overwatch
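The metric client health check fails because the dashboard-metrics-scraper Service has no ready backend, consistent with the CrashLoopBackOff seen in the kubelet log. A short check of the service and its backends (names taken from the messages above):

	kubectl --context default-k8s-diff-port-593480 -n kubernetes-dashboard get svc,endpointslices
	kubectl --context default-k8s-diff-port-593480 -n kubernetes-dashboard get pods -o wide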
	
	
	==> storage-provisioner [219038721a04310118069d66f5e074f6d504bd7804e061291016a223d0b92b7c] <==
	I1018 09:36:00.511740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:36:30.514733       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [77edd5912d990436794cf936b8f51159dc4b9c1c9baaa23fc03d051c5c9c7c44] <==
	I1018 09:36:30.739444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:36:30.751706       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:36:30.751756       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:36:30.757571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:34.212614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:38.472599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:42.071481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:45.125590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:48.147649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:48.155485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:36:48.155624       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:36:48.155801       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-593480_eb646e8d-a990-4470-8e0d-5e776b980fbc!
	I1018 09:36:48.155914       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75bb76a9-c543-40fa-ba6e-108e81012c94", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-593480_eb646e8d-a990-4470-8e0d-5e776b980fbc became leader
	W1018 09:36:48.165878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:48.169351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:36:48.256816       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-593480_eb646e8d-a990-4470-8e0d-5e776b980fbc!
	W1018 09:36:50.172673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:50.180529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:52.183726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:52.187925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:54.191250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:54.197979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:56.200698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:56.206476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
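The repeated warnings come from the provisioner's leader election, which still takes its lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) rather than the Lease type Kubernetes now steers such locks toward. A sketch for inspecting the lock named in the messages above:

	kubectl --context default-k8s-diff-port-593480 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# Lease-based locks, the non-deprecated equivalent, live here:
	kubectl --context default-k8s-diff-port-593480 -n kube-system get leases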
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480: exit status 2 (438.073281ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
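A non-zero exit from minikube status signals a component outside its expected state (plausibly the kubelet stopped by the pause here), which is why the harness flags exit status 2 as "may be ok" even though stdout prints Running. A sketch for pulling several status fields in one call (the Kubelet field name is an assumption beyond the Host and APIServer fields this report already uses):

	out/minikube-linux-arm64 status -p default-k8s-diff-port-593480 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}' || echo "exit=$?"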
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-593480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-593480
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-593480:

-- stdout --
	[
	    {
	        "Id": "bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679",
	        "Created": "2025-10-18T09:33:54.439784864Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1486570,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:35:40.932890437Z",
	            "FinishedAt": "2025-10-18T09:35:38.186218456Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/hostname",
	        "HostsPath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/hosts",
	        "LogPath": "/var/lib/docker/containers/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679-json.log",
	        "Name": "/default-k8s-diff-port-593480",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-593480:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-593480",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679",
	                "LowerDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7-init/diff:/var/lib/docker/overlay2/60519750c2737db0dfdb37adf468f6414129c2b5dcb7218ea59afbc63bfd537f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29fb3a737dd79177123811bfa1da8769085432f42a0fccd01372ece37a6300a7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-593480",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-593480/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-593480",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-593480",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-593480",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "afd171acdd1090977f4160057f319ed6d18dd04b4eb1326197c8da9a66878efc",
	            "SandboxKey": "/var/run/docker/netns/afd171acdd10",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34921"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34922"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34925"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34923"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34924"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-593480": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:b1:0c:5b:48:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1dd19821ca12a42bf31368ca6b87d68bd1622c2ff94469b47f038636ec26347a",
	                    "EndpointID": "d8e18e03504fb448faf3b46baccad74dd11e879eff1380c00648442919c324f3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-593480",
	                        "bfa509b1b053"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
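The inspect dump above is large; when a pause post-mortem needs only a few fields, Go templates keep it to one line each. A sketch reusing the template shape that appears later in this log for the SSH port:

	# Is the container itself paused at the Docker level?
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-593480
	# Host port bound to the apiserver's 8444/tcp inside the container:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-593480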
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480: exit status 2 (347.916624ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-593480 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-593480 logs -n 25: (1.664153106s)
I1018 09:37:00.754943 1276097 config.go:182] Loaded profile config "auto-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-886951 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │                     │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p no-preload-886951                                                                                                                                                                                                                          │ no-preload-886951            │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ delete  │ -p disable-driver-mounts-877810                                                                                                                                                                                                               │ disable-driver-mounts-877810 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:33 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:33 UTC │ 18 Oct 25 09:35 UTC │
	│ image   │ embed-certs-559379 image list --format=json                                                                                                                                                                                                   │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ pause   │ -p embed-certs-559379 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │                     │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ delete  │ -p embed-certs-559379                                                                                                                                                                                                                         │ embed-certs-559379           │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:34 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:34 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p newest-cni-250274 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ stop    │ -p newest-cni-250274 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable dashboard -p newest-cni-250274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-593480 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ image   │ newest-cni-250274 image list --format=json                                                                                                                                                                                                    │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ stop    │ -p default-k8s-diff-port-593480 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ pause   │ -p newest-cni-250274 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ delete  │ -p newest-cni-250274                                                                                                                                                                                                                          │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ delete  │ -p newest-cni-250274                                                                                                                                                                                                                          │ newest-cni-250274            │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p auto-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-275703                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-593480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:35 UTC │
	│ start   │ -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:35 UTC │ 18 Oct 25 09:36 UTC │
	│ image   │ default-k8s-diff-port-593480 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │ 18 Oct 25 09:36 UTC │
	│ pause   │ -p default-k8s-diff-port-593480 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-593480 │ jenkins │ v1.37.0 │ 18 Oct 25 09:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:35:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
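(For reading the entries below: in this klog header format the first line decodes as severity I, date 1018 = Oct 18, time 09:35:40.410325, thread id 1486102, emitted from out.go line 360.)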
	I1018 09:35:40.410325 1486102 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:35:40.410429 1486102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:40.410435 1486102 out.go:374] Setting ErrFile to fd 2...
	I1018 09:35:40.410439 1486102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:40.410769 1486102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:35:40.411193 1486102 out.go:368] Setting JSON to false
	I1018 09:35:40.412140 1486102 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40688,"bootTime":1760739453,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:35:40.412232 1486102 start.go:141] virtualization:  
	I1018 09:35:40.415660 1486102 out.go:179] * [default-k8s-diff-port-593480] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:35:40.420071 1486102 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:35:40.420232 1486102 notify.go:220] Checking for updates...
	I1018 09:35:40.423623 1486102 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:35:40.427168 1486102 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:40.430236 1486102 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:35:40.433128 1486102 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:35:40.436560 1486102 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:35:40.439981 1486102 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:40.440624 1486102 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:35:40.480237 1486102 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:35:40.480341 1486102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:40.625387 1486102 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-18 09:35:40.612686637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:40.625492 1486102 docker.go:318] overlay module found
	I1018 09:35:40.628975 1486102 out.go:179] * Using the docker driver based on existing profile
	I1018 09:35:40.632089 1486102 start.go:305] selected driver: docker
	I1018 09:35:40.632105 1486102 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:40.632205 1486102 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:35:40.633013 1486102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:35:40.780240 1486102 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-10-18 09:35:40.764789613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:35:40.780617 1486102 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:35:40.780638 1486102 cni.go:84] Creating CNI manager for ""
	I1018 09:35:40.780687 1486102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:40.780725 1486102 start.go:349] cluster config:
	{Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:40.785015 1486102 out.go:179] * Starting "default-k8s-diff-port-593480" primary control-plane node in "default-k8s-diff-port-593480" cluster
	I1018 09:35:40.788109 1486102 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:35:40.791057 1486102 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:35:40.793923 1486102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:40.793979 1486102 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 09:35:40.794022 1486102 cache.go:58] Caching tarball of preloaded images
	I1018 09:35:40.794108 1486102 preload.go:233] Found /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1018 09:35:40.794116 1486102 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:35:40.794222 1486102 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json ...
	I1018 09:35:40.794387 1486102 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:35:40.842980 1486102 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:35:40.843001 1486102 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:35:40.843014 1486102 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:35:40.843035 1486102 start.go:360] acquireMachinesLock for default-k8s-diff-port-593480: {Name:mk139126e1ddb766657a5fd510c1f904e5550412 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:35:40.843095 1486102 start.go:364] duration metric: took 38.637µs to acquireMachinesLock for "default-k8s-diff-port-593480"
	I1018 09:35:40.843114 1486102 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:35:40.843120 1486102 fix.go:54] fixHost starting: 
	I1018 09:35:40.843373 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:40.874865 1486102 fix.go:112] recreateIfNeeded on default-k8s-diff-port-593480: state=Stopped err=<nil>
	W1018 09:35:40.874893 1486102 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:35:40.042570 1485573 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-275703:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.452431146s)
	I1018 09:35:40.042611 1485573 kic.go:203] duration metric: took 4.452627835s to extract preloaded images to volume ...
	W1018 09:35:40.042781 1485573 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1018 09:35:40.042954 1485573 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:35:40.128682 1485573 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-275703 --name auto-275703 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-275703 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-275703 --network auto-275703 --ip 192.168.76.2 --volume auto-275703:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:35:40.502938 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Running}}
	I1018 09:35:40.534556 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:35:40.592873 1485573 cli_runner.go:164] Run: docker exec auto-275703 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:35:40.681972 1485573 oci.go:144] the created container "auto-275703" has a running status.
	I1018 09:35:40.682020 1485573 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa...
	I1018 09:35:41.919291 1485573 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:35:41.951589 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:35:41.978188 1485573 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:35:41.978207 1485573 kic_runner.go:114] Args: [docker exec --privileged auto-275703 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:35:42.057190 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:35:42.079632 1485573 machine.go:93] provisionDockerMachine start ...
	I1018 09:35:42.079759 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:42.127028 1485573 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:42.127415 1485573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34916 <nil> <nil>}
	I1018 09:35:42.127426 1485573 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:35:42.436060 1485573 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-275703
	
	I1018 09:35:42.436087 1485573 ubuntu.go:182] provisioning hostname "auto-275703"
	I1018 09:35:42.436147 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:42.454147 1485573 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:42.454450 1485573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34916 <nil> <nil>}
	I1018 09:35:42.454461 1485573 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-275703 && echo "auto-275703" | sudo tee /etc/hostname
	I1018 09:35:42.624105 1485573 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-275703
	
	I1018 09:35:42.624199 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:42.642273 1485573 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:42.642602 1485573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34916 <nil> <nil>}
	I1018 09:35:42.642625 1485573 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-275703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-275703/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-275703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:35:42.792026 1485573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
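A note on this provisioning step: the script above keeps /etc/hosts mapping 127.0.1.1 to the machine name, the Debian convention for resolving the local hostname. As a standalone, rerunnable sketch (MACHINE is a stand-in for the profile name):

    #!/bin/sh
    # Point 127.0.1.1 at the machine name; idempotent, so reruns are no-ops.
    MACHINE=auto-275703                 # stand-in: substitute the profile name
    if ! grep -q "\s${MACHINE}\$" /etc/hosts; then
        if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
            # A 127.0.1.1 entry exists for another name: rewrite it in place.
            sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${MACHINE}/" /etc/hosts
        else
            # No 127.0.1.1 line yet: append one.
            echo "127.0.1.1 ${MACHINE}" | sudo tee -a /etc/hosts
        fi
    fi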
	I1018 09:35:42.792052 1485573 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:35:42.792070 1485573 ubuntu.go:190] setting up certificates
	I1018 09:35:42.792119 1485573 provision.go:84] configureAuth start
	I1018 09:35:42.792203 1485573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-275703
	I1018 09:35:42.809331 1485573 provision.go:143] copyHostCerts
	I1018 09:35:42.809424 1485573 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:35:42.809438 1485573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:35:42.809517 1485573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:35:42.809927 1485573 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:35:42.809944 1485573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:35:42.809990 1485573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:35:42.810055 1485573 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:35:42.810060 1485573 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:35:42.810086 1485573 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:35:42.810140 1485573 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.auto-275703 san=[127.0.0.1 192.168.76.2 auto-275703 localhost minikube]
	I1018 09:35:43.486206 1485573 provision.go:177] copyRemoteCerts
	I1018 09:35:43.486305 1485573 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:35:43.486371 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:43.508827 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:43.615513 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:35:43.633149 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 09:35:43.651206 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:35:43.668576 1485573 provision.go:87] duration metric: took 876.426438ms to configureAuth
	I1018 09:35:43.668645 1485573 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:35:43.668842 1485573 config.go:182] Loaded profile config "auto-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:43.668957 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:43.686177 1485573 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:43.686485 1485573 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34916 <nil> <nil>}
	I1018 09:35:43.686505 1485573 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:35:43.938830 1485573 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:35:43.938850 1485573 machine.go:96] duration metric: took 1.859195232s to provisionDockerMachine
	I1018 09:35:43.938859 1485573 client.go:171] duration metric: took 9.054172048s to LocalClient.Create
	I1018 09:35:43.938879 1485573 start.go:167] duration metric: took 9.054252539s to libmachine.API.Create "auto-275703"
	I1018 09:35:43.938886 1485573 start.go:293] postStartSetup for "auto-275703" (driver="docker")
	I1018 09:35:43.938895 1485573 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:35:43.938956 1485573 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:35:43.939010 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:43.957273 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:44.064121 1485573 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:35:44.067600 1485573 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:35:44.067629 1485573 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:35:44.067641 1485573 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:35:44.067699 1485573 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:35:44.067797 1485573 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:35:44.067935 1485573 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:35:44.075577 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:44.093209 1485573 start.go:296] duration metric: took 154.308961ms for postStartSetup
	I1018 09:35:44.093581 1485573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-275703
	I1018 09:35:44.110543 1485573 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/config.json ...
	I1018 09:35:44.110825 1485573 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:35:44.110865 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:44.127604 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:44.229019 1485573 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:35:44.233806 1485573 start.go:128] duration metric: took 9.352752435s to createHost
	I1018 09:35:44.233828 1485573 start.go:83] releasing machines lock for "auto-275703", held for 9.352882252s
	I1018 09:35:44.233905 1485573 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-275703
	I1018 09:35:44.250245 1485573 ssh_runner.go:195] Run: cat /version.json
	I1018 09:35:44.250307 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:44.250252 1485573 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:35:44.250448 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:35:44.269547 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:44.287594 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:35:44.463203 1485573 ssh_runner.go:195] Run: systemctl --version
	I1018 09:35:44.469525 1485573 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:35:44.505349 1485573 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:35:44.509815 1485573 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:35:44.509885 1485573 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:35:44.539017 1485573 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
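The find invocation above is logged with its shell quoting stripped by ssh_runner; with quoting restored, the same step runs as:

    # Rename bridge/podman CNI configs so they stop shadowing the CNI
    # minikube installs for this driver/runtime combination.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;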
	I1018 09:35:44.539042 1485573 start.go:495] detecting cgroup driver to use...
	I1018 09:35:44.539073 1485573 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:35:44.539130 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:35:44.556298 1485573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:35:44.571181 1485573 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:35:44.571255 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:35:44.590568 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:35:44.612530 1485573 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:35:40.878196 1486102 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-593480" ...
	I1018 09:35:40.878276 1486102 cli_runner.go:164] Run: docker start default-k8s-diff-port-593480
	I1018 09:35:41.395025 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:41.450953 1486102 kic.go:430] container "default-k8s-diff-port-593480" state is running.
	I1018 09:35:41.452014 1486102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:35:41.503829 1486102 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/config.json ...
	I1018 09:35:41.504115 1486102 machine.go:93] provisionDockerMachine start ...
	I1018 09:35:41.504179 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:41.565541 1486102 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:41.565866 1486102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34921 <nil> <nil>}
	I1018 09:35:41.565876 1486102 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:35:41.566539 1486102 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36150->127.0.0.1:34921: read: connection reset by peer
	I1018 09:35:44.723630 1486102 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-593480
	
	I1018 09:35:44.723659 1486102 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-593480"
	I1018 09:35:44.723727 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:44.749611 1486102 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:44.749913 1486102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34921 <nil> <nil>}
	I1018 09:35:44.749925 1486102 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-593480 && echo "default-k8s-diff-port-593480" | sudo tee /etc/hostname
	I1018 09:35:44.939421 1486102 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-593480
	
	I1018 09:35:44.939582 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:44.968276 1486102 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:44.968599 1486102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34921 <nil> <nil>}
	I1018 09:35:44.968626 1486102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-593480' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-593480/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-593480' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:35:45.170969 1486102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:35:45.171054 1486102 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21767-1274243/.minikube CaCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21767-1274243/.minikube}
	I1018 09:35:45.171091 1486102 ubuntu.go:190] setting up certificates
	I1018 09:35:45.171136 1486102 provision.go:84] configureAuth start
	I1018 09:35:45.171267 1486102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:35:45.225892 1486102 provision.go:143] copyHostCerts
	I1018 09:35:45.225985 1486102 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem, removing ...
	I1018 09:35:45.226005 1486102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem
	I1018 09:35:45.226091 1486102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.pem (1078 bytes)
	I1018 09:35:45.226201 1486102 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem, removing ...
	I1018 09:35:45.226208 1486102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem
	I1018 09:35:45.226236 1486102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/cert.pem (1123 bytes)
	I1018 09:35:45.226293 1486102 exec_runner.go:144] found /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem, removing ...
	I1018 09:35:45.226299 1486102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem
	I1018 09:35:45.226322 1486102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21767-1274243/.minikube/key.pem (1675 bytes)
	I1018 09:35:45.226377 1486102 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-593480 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-593480 localhost minikube]
	I1018 09:35:44.767900 1485573 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:35:44.919655 1485573 docker.go:234] disabling docker service ...
	I1018 09:35:44.919727 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:35:44.960560 1485573 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:35:44.980348 1485573 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:35:45.241874 1485573 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:35:45.484908 1485573 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
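Taken together, the systemctl bursts above amount to this cleanup, which leaves CRI-O as the only container runtime the kubelet can reach (a sketch of the equivalent manual steps):

    # Stop, disable and mask cri-dockerd and Docker itself.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    # Verify: exits non-zero once docker is no longer active.
    sudo systemctl is-active --quiet docker || echo 'docker inactive'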
	I1018 09:35:45.500763 1485573 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:35:45.517021 1485573 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:35:45.517086 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.526444 1485573 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:35:45.526516 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.535930 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.545142 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.554876 1485573 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:35:45.564531 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.573785 1485573 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.591106 1485573 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:45.600335 1485573 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:35:45.607996 1485573 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:35:45.615556 1485573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:45.764580 1485573 ssh_runner.go:195] Run: sudo systemctl restart crio
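The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place before this restart. The files it converges on should look roughly like the following (reconstructed from the commands, not captured from the host):

    # /etc/crictl.yaml -- written verbatim by the tee above
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf -- expected shape after the edits
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"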
	I1018 09:35:45.923183 1485573 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:35:45.923259 1485573 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:35:45.928159 1485573 start.go:563] Will wait 60s for crictl version
	I1018 09:35:45.928224 1485573 ssh_runner.go:195] Run: which crictl
	I1018 09:35:45.932660 1485573 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:35:45.965585 1485573 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:35:45.965664 1485573 ssh_runner.go:195] Run: crio --version
	I1018 09:35:45.993991 1485573 ssh_runner.go:195] Run: crio --version
	I1018 09:35:46.030411 1485573 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
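The runtime checks above can be reproduced by hand inside the node to confirm the CRI socket answers and the versions match the log:

    # Query the CRI endpoint the tests wait on, then the CLI version.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version    # expected here: 1.34.1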
	I1018 09:35:45.887768 1486102 provision.go:177] copyRemoteCerts
	I1018 09:35:45.887891 1486102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:35:45.887965 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:45.905677 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:46.021000 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:35:46.043989 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:35:46.064738 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:35:46.090422 1486102 provision.go:87] duration metric: took 919.242204ms to configureAuth
	I1018 09:35:46.090453 1486102 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:35:46.090665 1486102 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:46.090773 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.111168 1486102 main.go:141] libmachine: Using SSH client type: native
	I1018 09:35:46.111473 1486102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 34921 <nil> <nil>}
	I1018 09:35:46.111489 1486102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:35:46.529271 1486102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:35:46.529292 1486102 machine.go:96] duration metric: took 5.025166298s to provisionDockerMachine
	I1018 09:35:46.529302 1486102 start.go:293] postStartSetup for "default-k8s-diff-port-593480" (driver="docker")
	I1018 09:35:46.529313 1486102 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:35:46.529371 1486102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:35:46.529416 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.592009 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:46.697143 1486102 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:35:46.701337 1486102 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:35:46.701362 1486102 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:35:46.701373 1486102 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/addons for local assets ...
	I1018 09:35:46.701428 1486102 filesync.go:126] Scanning /home/jenkins/minikube-integration/21767-1274243/.minikube/files for local assets ...
	I1018 09:35:46.701528 1486102 filesync.go:149] local asset: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem -> 12760972.pem in /etc/ssl/certs
	I1018 09:35:46.701670 1486102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:35:46.710645 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:46.731657 1486102 start.go:296] duration metric: took 202.339187ms for postStartSetup
	I1018 09:35:46.731813 1486102 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:35:46.731898 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.750986 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:46.853146 1486102 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:35:46.858874 1486102 fix.go:56] duration metric: took 6.015746623s for fixHost
	I1018 09:35:46.858896 1486102 start.go:83] releasing machines lock for "default-k8s-diff-port-593480", held for 6.015792473s
	I1018 09:35:46.858972 1486102 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-593480
	I1018 09:35:46.879443 1486102 ssh_runner.go:195] Run: cat /version.json
	I1018 09:35:46.879515 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.879814 1486102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:35:46.879897 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:46.903063 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:46.928174 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:47.020218 1486102 ssh_runner.go:195] Run: systemctl --version
	I1018 09:35:47.111346 1486102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:35:47.158833 1486102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:35:47.165593 1486102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:35:47.165661 1486102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:35:47.178658 1486102 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:35:47.178682 1486102 start.go:495] detecting cgroup driver to use...
	I1018 09:35:47.178716 1486102 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1018 09:35:47.178762 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:35:47.198584 1486102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:35:47.215337 1486102 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:35:47.215396 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:35:47.234320 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:35:47.248735 1486102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:35:47.410228 1486102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:35:47.608474 1486102 docker.go:234] disabling docker service ...
	I1018 09:35:47.608557 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:35:47.627278 1486102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:35:47.642721 1486102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:35:47.791719 1486102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:35:47.945032 1486102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:35:47.959584 1486102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:35:47.974728 1486102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:35:47.974807 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:47.984458 1486102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1018 09:35:47.984525 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:47.993927 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.003489 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.015790 1486102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:35:48.026151 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.037092 1486102 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.046801 1486102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:35:48.057549 1486102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:35:48.066891 1486102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:35:48.075819 1486102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:48.293780 1486102 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:35:48.450383 1486102 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:35:48.450496 1486102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:35:48.454826 1486102 start.go:563] Will wait 60s for crictl version
	I1018 09:35:48.454906 1486102 ssh_runner.go:195] Run: which crictl
	I1018 09:35:48.459369 1486102 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:35:48.492817 1486102 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:35:48.492942 1486102 ssh_runner.go:195] Run: crio --version
	I1018 09:35:48.532830 1486102 ssh_runner.go:195] Run: crio --version
	I1018 09:35:48.579040 1486102 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:35:46.033339 1485573 cli_runner.go:164] Run: docker network inspect auto-275703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:35:46.062363 1485573 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:35:46.065904 1485573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:35:46.076899 1485573 kubeadm.go:883] updating cluster {Name:auto-275703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-275703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:35:46.077032 1485573 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:46.077091 1485573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:46.119370 1485573 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:46.119390 1485573 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:35:46.119441 1485573 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:46.157672 1485573 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:46.157693 1485573 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:35:46.157700 1485573 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:35:46.157785 1485573 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-275703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-275703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:35:46.157870 1485573 ssh_runner.go:195] Run: crio config
	I1018 09:35:46.233844 1485573 cni.go:84] Creating CNI manager for ""
	I1018 09:35:46.233866 1485573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:46.233888 1485573 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:35:46.233910 1485573 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-275703 NodeName:auto-275703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:35:46.234036 1485573 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-275703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
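	minikube ships this config to the node as /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines down) and promotes it to kubeadm.yaml before init. Bootstrapping by hand from the same file would be roughly:

	    # The docker driver fails kubeadm's SystemVerification preflight
	    # (see the "ignoring SystemVerification" line below), so skip it.
	    sudo kubeadm init \
	        --config /var/tmp/minikube/kubeadm.yaml \
	        --ignore-preflight-errors=SystemVerification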
	
	I1018 09:35:46.234104 1485573 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:35:46.243796 1485573 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:35:46.243905 1485573 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:35:46.251465 1485573 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1018 09:35:46.264263 1485573 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:35:46.276928 1485573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1018 09:35:46.291499 1485573 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:35:46.295373 1485573 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
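Both /etc/hosts updates in this log (host.minikube.internal earlier, control-plane.minikube.internal here) use the same replace-or-append pattern; spelled out with comments:

    #!/bin/bash
    # Drop any stale line for NAME, append the fresh mapping, then let
    # sudo perform the final copy back over /etc/hosts.
    NAME=control-plane.minikube.internal    # stand-in for the managed entry
    IP=192.168.76.2
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts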
	I1018 09:35:46.306255 1485573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:46.447361 1485573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:46.470644 1485573 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703 for IP: 192.168.76.2
	I1018 09:35:46.470669 1485573 certs.go:195] generating shared ca certs ...
	I1018 09:35:46.470685 1485573 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:46.470825 1485573 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:35:46.470877 1485573 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:35:46.470889 1485573 certs.go:257] generating profile certs ...
	I1018 09:35:46.470947 1485573 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.key
	I1018 09:35:46.470961 1485573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt with IP's: []
	I1018 09:35:47.132208 1485573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt ...
	I1018 09:35:47.132242 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: {Name:mkc4fece3eb0c9a2624664e3692305aa02595479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:47.132463 1485573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.key ...
	I1018 09:35:47.132480 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.key: {Name:mk6114ba1da7c76e85cfb7a65b5a952f9d736289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:47.132612 1485573 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key.466c655c
	I1018 09:35:47.132643 1485573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt.466c655c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 09:35:47.366745 1485573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt.466c655c ...
	I1018 09:35:47.366827 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt.466c655c: {Name:mk6aed3acea771965a2309baf2d1b151fe996c6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:47.367055 1485573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key.466c655c ...
	I1018 09:35:47.367093 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key.466c655c: {Name:mk24b2edc779545824c94a396476e5f326938849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:47.367217 1485573 certs.go:382] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt.466c655c -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt
	I1018 09:35:47.367337 1485573 certs.go:386] copying /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key.466c655c -> /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key
	I1018 09:35:47.367428 1485573 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.key
	I1018 09:35:47.367460 1485573 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.crt with IP's: []
	I1018 09:35:48.288703 1485573 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.crt ...
	I1018 09:35:48.288776 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.crt: {Name:mk043e5152d1f5c945198728a86358f29b9fe528 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:48.288995 1485573 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.key ...
	I1018 09:35:48.289032 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.key: {Name:mkc5beb28c916e50b753161b57914e101a3a05b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:48.289253 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:35:48.289322 1485573 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:35:48.289348 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:35:48.289390 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:35:48.289447 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:35:48.289489 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:35:48.289565 1485573 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:48.290151 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:35:48.312533 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:35:48.332735 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:35:48.352550 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:35:48.377965 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 09:35:48.401875 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:35:48.425433 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:35:48.445818 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:35:48.477771 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:35:48.499285 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:35:48.517411 1485573 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:35:48.535701 1485573 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:35:48.548782 1485573 ssh_runner.go:195] Run: openssl version
	I1018 09:35:48.555452 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:35:48.564189 1485573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:35:48.568749 1485573 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:35:48.568857 1485573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:35:48.621739 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:35:48.630761 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:35:48.643972 1485573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:48.648462 1485573 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:48.648526 1485573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:48.693069 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:35:48.701325 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:35:48.710577 1485573 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:35:48.714414 1485573 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:35:48.714498 1485573 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:35:48.758552 1485573 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
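Each openssl/ln pair above builds OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found via a symlink named <subject-hash>.0. The generic recipe:

    # Hash the cert subject and create the lookup symlink OpenSSL expects.
    CERT=/usr/share/ca-certificates/minikubeCA.pem    # any CA file
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"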
	I1018 09:35:48.769735 1485573 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:35:48.775226 1485573 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:35:48.775275 1485573 kubeadm.go:400] StartCluster: {Name:auto-275703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-275703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:48.775358 1485573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:35:48.775414 1485573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:35:48.820320 1485573 cri.go:89] found id: ""
	I1018 09:35:48.820397 1485573 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:35:48.831316 1485573 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:35:48.839703 1485573 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:35:48.839769 1485573 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:35:48.851357 1485573 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:35:48.851376 1485573 kubeadm.go:157] found existing configuration files:
	
	I1018 09:35:48.851431 1485573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:35:48.860105 1485573 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:35:48.860171 1485573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:35:48.867428 1485573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:35:48.875889 1485573 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:35:48.875950 1485573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:35:48.905267 1485573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:35:48.924480 1485573 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:35:48.924564 1485573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:35:48.936569 1485573 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:35:48.958253 1485573 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:35:48.958354 1485573 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
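
The four checks above repeat one pattern per kubeconfig: keep the file only if it already points at the expected control-plane endpoint, otherwise delete it so kubeadm regenerates it. One iteration, sketched with $CONF as a hypothetical placeholder for any of the four paths:

	CONF=/etc/kubernetes/admin.conf   # placeholder; the log cycles through four files
	if ! sudo grep -q "https://control-plane.minikube.internal:8443" "$CONF"; then
	  sudo rm -f "$CONF"   # stale or missing: let kubeadm init rewrite it
	fi
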
	I1018 09:35:48.984053 1485573 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:35:49.040738 1485573 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:35:49.041146 1485573 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:35:49.084198 1485573 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:35:49.084332 1485573 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1018 09:35:49.084423 1485573 kubeadm.go:318] OS: Linux
	I1018 09:35:49.084532 1485573 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:35:49.084767 1485573 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1018 09:35:49.084829 1485573 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:35:49.084883 1485573 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:35:49.084964 1485573 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:35:49.085021 1485573 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:35:49.085071 1485573 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:35:49.085125 1485573 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:35:49.085177 1485573 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1018 09:35:49.218479 1485573 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:35:49.218648 1485573 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:35:49.218826 1485573 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:35:49.236275 1485573 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:35:49.242323 1485573 out.go:252]   - Generating certificates and keys ...
	I1018 09:35:49.242468 1485573 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:35:49.242560 1485573 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:35:48.581995 1486102 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-593480 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:35:48.596882 1486102 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:35:48.600625 1486102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
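
The one-liner above rewrites /etc/hosts in place: filter out any stale host.minikube.internal entry, append the current mapping, write to a PID-named temp file, then copy it back with sudo. Unrolled with the values from this run:

	# $$ expands to the shell's PID, giving a unique temp file per invocation.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.85.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
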
	I1018 09:35:48.609709 1486102 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:35:48.609842 1486102 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:35:48.609907 1486102 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:48.653903 1486102 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:48.653931 1486102 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:35:48.653989 1486102 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:35:48.689641 1486102 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:35:48.689665 1486102 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:35:48.689672 1486102 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1018 09:35:48.689769 1486102 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-593480 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
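
The empty ExecStart= in the [Service] section above is deliberate: in a systemd drop-in it clears the ExecStart inherited from the base kubelet.service before the override defines the real command. A minimal sketch of writing such a drop-in by hand, assuming the 10-kubeadm.conf path that is scp'd a few lines below and a trimmed flag list:

	# <<- strips the leading tabs; the empty ExecStart= resets the base unit.
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
		[Service]
		ExecStart=
		ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload   # pick up the drop-in, as the log does below
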
	I1018 09:35:48.689853 1486102 ssh_runner.go:195] Run: crio config
	I1018 09:35:48.764780 1486102 cni.go:84] Creating CNI manager for ""
	I1018 09:35:48.764839 1486102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:35:48.764882 1486102 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:35:48.764942 1486102 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-593480 NodeName:default-k8s-diff-port-593480 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:35:48.765109 1486102 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-593480"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:35:48.765212 1486102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:35:48.773966 1486102 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:35:48.774129 1486102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:35:48.782430 1486102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:35:48.795343 1486102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:35:48.808717 1486102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
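
The 2225-byte file staged above is the multi-document manifest rendered earlier (kubeadm.yaml.new). Recent kubeadm releases can sanity-check such a file offline before init consumes it; a sketch, assuming the v1.34 binary minikube stages on the node and that its `config validate` subcommand is available there:

	# Report validation problems in the rendered config without applying it.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
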
	I1018 09:35:48.826902 1486102 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:35:48.832106 1486102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:35:48.843135 1486102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:49.008030 1486102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:49.026120 1486102 certs.go:69] Setting up /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480 for IP: 192.168.85.2
	I1018 09:35:49.026138 1486102 certs.go:195] generating shared ca certs ...
	I1018 09:35:49.026154 1486102 certs.go:227] acquiring lock for ca certs: {Name:mk38b7543cc5a8f0209b20a590824c94ad8d0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:49.026291 1486102 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key
	I1018 09:35:49.026331 1486102 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key
	I1018 09:35:49.026337 1486102 certs.go:257] generating profile certs ...
	I1018 09:35:49.026418 1486102 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.key
	I1018 09:35:49.026482 1486102 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key.3ec3eca5
	I1018 09:35:49.026519 1486102 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key
	I1018 09:35:49.026665 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem (1338 bytes)
	W1018 09:35:49.026693 1486102 certs.go:480] ignoring /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097_empty.pem, impossibly tiny 0 bytes
	I1018 09:35:49.026701 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:35:49.026726 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:35:49.026747 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:35:49.026769 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/key.pem (1675 bytes)
	I1018 09:35:49.026820 1486102 certs.go:484] found cert: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem (1708 bytes)
	I1018 09:35:49.027423 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:35:49.067046 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1018 09:35:49.097853 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:35:49.124692 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:35:49.141971 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:35:49.159544 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:35:49.185054 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:35:49.232707 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:35:49.305614 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/ssl/certs/12760972.pem --> /usr/share/ca-certificates/12760972.pem (1708 bytes)
	I1018 09:35:49.343654 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:35:49.363811 1486102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21767-1274243/.minikube/certs/1276097.pem --> /usr/share/ca-certificates/1276097.pem (1338 bytes)
	I1018 09:35:49.381044 1486102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:35:49.393311 1486102 ssh_runner.go:195] Run: openssl version
	I1018 09:35:49.399541 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12760972.pem && ln -fs /usr/share/ca-certificates/12760972.pem /etc/ssl/certs/12760972.pem"
	I1018 09:35:49.407431 1486102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12760972.pem
	I1018 09:35:49.410912 1486102 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 08:36 /usr/share/ca-certificates/12760972.pem
	I1018 09:35:49.411018 1486102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12760972.pem
	I1018 09:35:49.462256 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12760972.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:35:49.470075 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:35:49.478031 1486102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:49.485441 1486102 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:30 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:49.485554 1486102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:35:49.526421 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:35:49.535143 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1276097.pem && ln -fs /usr/share/ca-certificates/1276097.pem /etc/ssl/certs/1276097.pem"
	I1018 09:35:49.543584 1486102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1276097.pem
	I1018 09:35:49.548149 1486102 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 08:36 /usr/share/ca-certificates/1276097.pem
	I1018 09:35:49.548307 1486102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1276097.pem
	I1018 09:35:49.593000 1486102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1276097.pem /etc/ssl/certs/51391683.0"
	I1018 09:35:49.601462 1486102 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:35:49.605550 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:35:49.650842 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:35:49.701224 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:35:49.743116 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:35:49.805091 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:35:49.892887 1486102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
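
Each probe above relies on openssl's -checkend flag, which exits non-zero when the certificate expires within the given number of seconds (86400 s = 24 h), so a failing probe forces regeneration before the cluster starts. The same check in isolation, using one of the certs copied earlier:

	if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "apiserver cert expires within 24h; regenerate before start" >&2
	fi
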
	I1018 09:35:49.980910 1486102 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-593480 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-593480 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:35:49.981053 1486102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:35:49.981165 1486102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:35:50.089138 1486102 cri.go:89] found id: "f25f31e0de7b14e6ec30c9543448ad6c36163463aa5bb218aac0f99a95ccfe92"
	I1018 09:35:50.089174 1486102 cri.go:89] found id: "733c7cf0be6cd400ab00223c34a62e45c087d4073aeca1345162c44182d78944"
	I1018 09:35:50.089180 1486102 cri.go:89] found id: ""
	I1018 09:35:50.089265 1486102 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:35:50.118623 1486102 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:35:50Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:35:50.118741 1486102 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:35:50.147855 1486102 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:35:50.147878 1486102 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:35:50.147955 1486102 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:35:50.169883 1486102 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:35:50.170362 1486102 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-593480" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:50.170515 1486102 kubeconfig.go:62] /home/jenkins/minikube-integration/21767-1274243/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-593480" cluster setting kubeconfig missing "default-k8s-diff-port-593480" context setting]
	I1018 09:35:50.170870 1486102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:50.172280 1486102 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:35:50.220091 1486102 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1018 09:35:50.220123 1486102 kubeadm.go:601] duration metric: took 72.238577ms to restartPrimaryControlPlane
	I1018 09:35:50.220132 1486102 kubeadm.go:402] duration metric: took 239.242577ms to StartCluster
	I1018 09:35:50.220147 1486102 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:50.220247 1486102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:35:50.222642 1486102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:35:50.223121 1486102 config.go:182] Loaded profile config "default-k8s-diff-port-593480": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:50.223181 1486102 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:35:50.223234 1486102 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:35:50.223482 1486102 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-593480"
	I1018 09:35:50.223506 1486102 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-593480"
	W1018 09:35:50.223518 1486102 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:35:50.223552 1486102 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-593480"
	I1018 09:35:50.223566 1486102 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-593480"
	W1018 09:35:50.223571 1486102 addons.go:247] addon dashboard should already be in state true
	I1018 09:35:50.223588 1486102 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:35:50.224087 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:50.224256 1486102 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:35:50.224741 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:50.226150 1486102 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-593480"
	I1018 09:35:50.226207 1486102 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-593480"
	I1018 09:35:50.226508 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:50.229004 1486102 out.go:179] * Verifying Kubernetes components...
	I1018 09:35:50.232431 1486102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:35:50.283049 1486102 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-593480"
	W1018 09:35:50.283073 1486102 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:35:50.283099 1486102 host.go:66] Checking if "default-k8s-diff-port-593480" exists ...
	I1018 09:35:50.283530 1486102 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-593480 --format={{.State.Status}}
	I1018 09:35:50.285410 1486102 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:35:50.288772 1486102 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:35:50.288875 1486102 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:50.288885 1486102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:35:50.288953 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:50.299890 1486102 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:35:50.302806 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:35:50.302967 1486102 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:35:50.303046 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:50.324163 1486102 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:50.324188 1486102 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:35:50.324256 1486102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-593480
	I1018 09:35:50.341272 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:50.349534 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
	I1018 09:35:50.371418 1486102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34921 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa Username:docker}
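
All three ssh clients above target 127.0.0.1:34921, the host port Docker mapped to the container's 22/tcp (resolved by the container-inspect calls just before them). An equivalent manual session, using the user and key path from this run:

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-593480)
	ssh -p "$PORT" -i /home/jenkins/minikube-integration/21767-1274243/.minikube/machines/default-k8s-diff-port-593480/id_rsa docker@127.0.0.1
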
	I1018 09:35:49.808474 1485573 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:35:50.324016 1485573 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:35:50.557598 1485573 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:35:50.910598 1485573 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:35:51.881876 1485573 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:35:51.882312 1485573 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-275703 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:35:52.164643 1485573 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:35:52.165094 1485573 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-275703 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:35:53.366851 1485573 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:35:50.634608 1486102 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:35:50.712321 1486102 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-593480" to be "Ready" ...
	I1018 09:35:50.753379 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:35:50.753399 1486102 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:35:50.794207 1486102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:35:50.831160 1486102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:35:50.856681 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:35:50.856754 1486102 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:35:50.982081 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:35:50.982153 1486102 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:35:51.133162 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:35:51.133236 1486102 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:35:51.238858 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:35:51.238932 1486102 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:35:51.363833 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:35:51.363921 1486102 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:35:51.432456 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:35:51.432534 1486102 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:35:51.482840 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:35:51.482917 1486102 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:35:51.553864 1486102 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:35:51.553939 1486102 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:35:51.598936 1486102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:35:55.511368 1485573 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:35:55.686518 1485573 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:35:55.686600 1485573 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:35:55.850898 1485573 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:35:56.612198 1485573 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:35:57.239264 1485573 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:35:57.341424 1485573 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:35:57.600192 1485573 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:35:57.600300 1485573 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:35:57.604236 1485573 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:35:57.607913 1485573 out.go:252]   - Booting up control plane ...
	I1018 09:35:57.608027 1485573 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:35:57.608117 1485573 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:35:57.610568 1485573 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:35:57.649863 1485573 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:35:57.649988 1485573 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:35:57.669807 1485573 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:35:57.669911 1485573 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:35:57.669953 1485573 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:35:57.923294 1485573 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:35:57.923419 1485573 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:35:57.650038 1486102 node_ready.go:49] node "default-k8s-diff-port-593480" is "Ready"
	I1018 09:35:57.650069 1486102 node_ready.go:38] duration metric: took 6.937669344s for node "default-k8s-diff-port-593480" to be "Ready" ...
	I1018 09:35:57.650082 1486102 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:35:57.650140 1486102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:35:58.126567 1486102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.33227718s)
	I1018 09:36:00.902855 1486102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.071619708s)
	I1018 09:36:00.902976 1486102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.303944964s)
	I1018 09:36:00.903100 1486102 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.252945029s)
	I1018 09:36:00.903120 1486102 api_server.go:72] duration metric: took 10.679912361s to wait for apiserver process to appear ...
	I1018 09:36:00.903126 1486102 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:36:00.903142 1486102 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1018 09:36:00.906363 1486102 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-593480 addons enable metrics-server
	
	I1018 09:36:00.909457 1486102 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1018 09:36:00.444249 1485573 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.519027818s
	I1018 09:36:00.447045 1485573 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:36:00.447152 1485573 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 09:36:00.447637 1485573 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:36:00.448853 1485573 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:36:04.570713 1485573 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.121566906s
	I1018 09:36:00.913335 1486102 addons.go:514] duration metric: took 10.690090433s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1018 09:36:00.919865 1486102 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1018 09:36:00.921242 1486102 api_server.go:141] control plane version: v1.34.1
	I1018 09:36:00.921270 1486102 api_server.go:131] duration metric: took 18.137472ms to wait for apiserver health ...
	I1018 09:36:00.921281 1486102 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:36:00.942050 1486102 system_pods.go:59] 8 kube-system pods found
	I1018 09:36:00.942091 1486102 system_pods.go:61] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:36:00.942101 1486102 system_pods.go:61] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:36:00.942114 1486102 system_pods.go:61] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:36:00.942121 1486102 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:36:00.942129 1486102 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:36:00.942137 1486102 system_pods.go:61] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:36:00.942144 1486102 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:36:00.942154 1486102 system_pods.go:61] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Running
	I1018 09:36:00.942160 1486102 system_pods.go:74] duration metric: took 20.874216ms to wait for pod list to return data ...
	I1018 09:36:00.942174 1486102 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:36:00.968799 1486102 default_sa.go:45] found service account: "default"
	I1018 09:36:00.968832 1486102 default_sa.go:55] duration metric: took 26.651147ms for default service account to be created ...
	I1018 09:36:00.968843 1486102 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:36:00.971717 1486102 system_pods.go:86] 8 kube-system pods found
	I1018 09:36:00.971754 1486102 system_pods.go:89] "coredns-66bc5c9577-lxwgf" [7dfe7cc5-827f-4a29-932a-943c05bc729e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:36:00.971764 1486102 system_pods.go:89] "etcd-default-k8s-diff-port-593480" [9c3866d1-8a94-420d-ac51-55bcf2f955e1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:36:00.971770 1486102 system_pods.go:89] "kindnet-ptbw6" [5fa3779f-2d5f-4303-8f6b-af5ae96f1fae] Running
	I1018 09:36:00.971776 1486102 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-593480" [35443851-03a6-43b4-b827-6dcd89b14052] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:36:00.971785 1486102 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-593480" [9e1e3689-cdbf-48cf-8c1e-4a55e905811d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:36:00.971790 1486102 system_pods.go:89] "kube-proxy-lz9p5" [df6ea9c5-3f27-4e58-be1b-c6f47b71aa63] Running
	I1018 09:36:00.971798 1486102 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-593480" [ab2397f7-d4be-4bc7-98eb-b9ddb0e6a9a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:36:00.971802 1486102 system_pods.go:89] "storage-provisioner" [da9f578c-74b8-40c2-a810-245c70e07eae] Running
	I1018 09:36:00.971808 1486102 system_pods.go:126] duration metric: took 2.960402ms to wait for k8s-apps to be running ...
	I1018 09:36:00.971821 1486102 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:36:00.971892 1486102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:36:01.002727 1486102 system_svc.go:56] duration metric: took 30.895336ms WaitForService to wait for kubelet
	I1018 09:36:01.002762 1486102 kubeadm.go:586] duration metric: took 10.779553144s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:36:01.002783 1486102 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:36:01.011023 1486102 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1018 09:36:01.011060 1486102 node_conditions.go:123] node cpu capacity is 2
	I1018 09:36:01.011072 1486102 node_conditions.go:105] duration metric: took 8.2842ms to run NodePressure ...
	I1018 09:36:01.011085 1486102 start.go:241] waiting for startup goroutines ...
	I1018 09:36:01.011093 1486102 start.go:246] waiting for cluster config update ...
	I1018 09:36:01.011103 1486102 start.go:255] writing updated cluster config ...
	I1018 09:36:01.011387 1486102 ssh_runner.go:195] Run: rm -f paused
	I1018 09:36:01.022750 1486102 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:36:01.029988 1486102 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:36:03.037812 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	I1018 09:36:06.762589 1485573 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.313214106s
	I1018 09:36:08.953211 1485573 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.505133315s
	I1018 09:36:08.983429 1485573 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:36:09.005550 1485573 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:36:09.026478 1485573 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:36:09.026944 1485573 kubeadm.go:318] [mark-control-plane] Marking the node auto-275703 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:36:09.050998 1485573 kubeadm.go:318] [bootstrap-token] Using token: c5woyt.2467er8qsdbu8ipv
	I1018 09:36:09.054136 1485573 out.go:252]   - Configuring RBAC rules ...
	I1018 09:36:09.054272 1485573 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:36:09.061541 1485573 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:36:09.071665 1485573 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:36:09.081135 1485573 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:36:09.086125 1485573 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:36:09.091072 1485573 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:36:09.361650 1485573 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:36:09.815924 1485573 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:36:10.372359 1485573 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:36:10.373494 1485573 kubeadm.go:318] 
	I1018 09:36:10.373567 1485573 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:36:10.373574 1485573 kubeadm.go:318] 
	I1018 09:36:10.373654 1485573 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:36:10.373658 1485573 kubeadm.go:318] 
	I1018 09:36:10.373685 1485573 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:36:10.373747 1485573 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:36:10.373805 1485573 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:36:10.373810 1485573 kubeadm.go:318] 
	I1018 09:36:10.373867 1485573 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:36:10.373871 1485573 kubeadm.go:318] 
	I1018 09:36:10.373921 1485573 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:36:10.373925 1485573 kubeadm.go:318] 
	I1018 09:36:10.373979 1485573 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:36:10.374057 1485573 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:36:10.374129 1485573 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:36:10.374134 1485573 kubeadm.go:318] 
	I1018 09:36:10.374222 1485573 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:36:10.374302 1485573 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:36:10.374307 1485573 kubeadm.go:318] 
	I1018 09:36:10.374394 1485573 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token c5woyt.2467er8qsdbu8ipv \
	I1018 09:36:10.374502 1485573 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 \
	I1018 09:36:10.374524 1485573 kubeadm.go:318] 	--control-plane 
	I1018 09:36:10.374528 1485573 kubeadm.go:318] 
	I1018 09:36:10.374616 1485573 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:36:10.374622 1485573 kubeadm.go:318] 
	I1018 09:36:10.374708 1485573 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token c5woyt.2467er8qsdbu8ipv \
	I1018 09:36:10.374815 1485573 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:96f90b4cb8c73febdc9c05bdeac126d36ffd51807995d3aad044a9ec3e33b953 
	I1018 09:36:10.380564 1485573 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1018 09:36:10.380806 1485573 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1018 09:36:10.380915 1485573 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
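	The bootstrap token printed above (c5woyt.2467er8qsdbu8ipv) is short-lived; on a long-lived cluster a fresh join command would normally be regenerated rather than reused. A hedged sketch using stock kubeadm commands (note that minikube relocates its certificates under /var/lib/minikube/certs, so the ca.crt path below assumes a standard kubeadm layout):
	  # Regenerate a join command once the token above expires:
	  sudo kubeadm token create --print-join-command
	  # Recompute the --discovery-token-ca-cert-hash shown above, if needed:
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'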
	I1018 09:36:10.380933 1485573 cni.go:84] Creating CNI manager for ""
	I1018 09:36:10.380949 1485573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:36:10.384305 1485573 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1018 09:36:05.534992 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:07.535173 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:09.541476 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	I1018 09:36:10.388127 1485573 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:36:10.399446 1485573 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:36:10.399466 1485573 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:36:10.433203 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:36:10.928493 1485573 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:36:10.928812 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:10.928930 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-275703 minikube.k8s.io/updated_at=2025_10_18T09_36_10_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820 minikube.k8s.io/name=auto-275703 minikube.k8s.io/primary=true
	I1018 09:36:11.440359 1485573 ops.go:34] apiserver oom_adj: -16
	I1018 09:36:11.440472 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:11.940569 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:12.441558 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:12.940595 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:13.441053 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:13.940583 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:14.441292 1485573 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:36:14.631272 1485573 kubeadm.go:1113] duration metric: took 3.702526754s to wait for elevateKubeSystemPrivileges
	I1018 09:36:14.631368 1485573 kubeadm.go:402] duration metric: took 25.856085084s to StartCluster
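	The burst of "kubectl get sa default" calls between 09:36:11 and 09:36:14 is a readiness poll: the controller-manager creates the default ServiceAccount asynchronously after init, and minikube waits for it before granting kube-system privileges. A minimal equivalent loop, with the binary and kubeconfig paths taken from the log above:
	  # Poll until the default ServiceAccount exists, then proceed with RBAC.
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5   # retry; the ServiceAccount appears shortly after init
	  done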
	I1018 09:36:14.631400 1485573 settings.go:142] acquiring lock: {Name:mk4240955247baece9b9f2d9a5a8e189cb089184 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:36:14.631487 1485573 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:36:14.632580 1485573 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/kubeconfig: {Name:mk4be33efdd3c492a070116d2c0b6f8b234870aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:36:14.632864 1485573 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:36:14.633002 1485573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:36:14.633308 1485573 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:36:14.633395 1485573 addons.go:69] Setting storage-provisioner=true in profile "auto-275703"
	I1018 09:36:14.633412 1485573 addons.go:238] Setting addon storage-provisioner=true in "auto-275703"
	I1018 09:36:14.633435 1485573 host.go:66] Checking if "auto-275703" exists ...
	I1018 09:36:14.633991 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:36:14.634286 1485573 config.go:182] Loaded profile config "auto-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:36:14.634418 1485573 addons.go:69] Setting default-storageclass=true in profile "auto-275703"
	I1018 09:36:14.634451 1485573 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-275703"
	I1018 09:36:14.634777 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:36:14.638634 1485573 out.go:179] * Verifying Kubernetes components...
	I1018 09:36:14.642374 1485573 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:36:14.672019 1485573 addons.go:238] Setting addon default-storageclass=true in "auto-275703"
	I1018 09:36:14.672054 1485573 host.go:66] Checking if "auto-275703" exists ...
	I1018 09:36:14.672455 1485573 cli_runner.go:164] Run: docker container inspect auto-275703 --format={{.State.Status}}
	I1018 09:36:14.689303 1485573 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1018 09:36:12.036542 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:14.037408 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	I1018 09:36:14.710275 1485573 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:36:14.710293 1485573 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:36:14.710354 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:36:14.710546 1485573 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:36:14.710555 1485573 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:36:14.710609 1485573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-275703
	I1018 09:36:14.747012 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:36:14.758438 1485573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34916 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/auto-275703/id_rsa Username:docker}
	I1018 09:36:15.173737 1485573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:36:15.282227 1485573 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:36:15.400709 1485573 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:36:15.400822 1485573 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:36:16.321101 1485573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.038840499s)
	I1018 09:36:16.321898 1485573 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 09:36:16.322105 1485573 node_ready.go:35] waiting up to 15m0s for node "auto-275703" to be "Ready" ...
	I1018 09:36:16.322864 1485573 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.149100652s)
	I1018 09:36:16.406369 1485573 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:36:16.411616 1485573 addons.go:514] duration metric: took 1.77828891s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:36:16.826956 1485573 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-275703" context rescaled to 1 replicas
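	The sed pipeline at 09:36:15.400709 splices two stanzas into the CoreDNS Corefile before replacing the ConfigMap: a log directive ahead of errors, and a hosts block ahead of the forward stanza. Reconstructed from those sed expressions (surrounding plugins elided, indentation approximate), the edited fragment looks roughly like:
	  .:53 {
	      log
	      errors
	      ...
	      hosts {
	         192.168.76.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf
	      ...
	  }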
	W1018 09:36:18.325461 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:16.042868 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:18.536398 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:20.326235 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:22.825874 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:21.035140 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:23.535798 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:25.325505 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:27.825388 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:25.536067 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:28.035914 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:30.037485 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:30.325353 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:32.825103 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:32.535190 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:34.537062 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:34.825213 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:37.324968 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:39.325119 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:37.036358 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	W1018 09:36:39.535359 1486102 pod_ready.go:104] pod "coredns-66bc5c9577-lxwgf" is not "Ready", error: <nil>
	I1018 09:36:41.035546 1486102 pod_ready.go:94] pod "coredns-66bc5c9577-lxwgf" is "Ready"
	I1018 09:36:41.035570 1486102 pod_ready.go:86] duration metric: took 40.005550821s for pod "coredns-66bc5c9577-lxwgf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.039128 1486102 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.044774 1486102 pod_ready.go:94] pod "etcd-default-k8s-diff-port-593480" is "Ready"
	I1018 09:36:41.044798 1486102 pod_ready.go:86] duration metric: took 5.648835ms for pod "etcd-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.047041 1486102 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.054309 1486102 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-593480" is "Ready"
	I1018 09:36:41.054334 1486102 pod_ready.go:86] duration metric: took 7.226433ms for pod "kube-apiserver-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.056530 1486102 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.233768 1486102 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-593480" is "Ready"
	I1018 09:36:41.233855 1486102 pod_ready.go:86] duration metric: took 177.302903ms for pod "kube-controller-manager-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.434101 1486102 pod_ready.go:83] waiting for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:41.834029 1486102 pod_ready.go:94] pod "kube-proxy-lz9p5" is "Ready"
	I1018 09:36:41.834066 1486102 pod_ready.go:86] duration metric: took 399.937557ms for pod "kube-proxy-lz9p5" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:42.034591 1486102 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:42.434257 1486102 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-593480" is "Ready"
	I1018 09:36:42.434282 1486102 pod_ready.go:86] duration metric: took 399.656679ms for pod "kube-scheduler-default-k8s-diff-port-593480" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:36:42.434295 1486102 pod_ready.go:40] duration metric: took 41.411509734s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:36:42.490888 1486102 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1018 09:36:42.494220 1486102 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-593480" cluster and "default" namespace by default
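	The 40s pod_ready loop above amounts to polling pod conditions; a hedged one-line equivalent with kubectl wait (context name taken from the "Done!" line, timeout illustrative):
	  kubectl --context default-k8s-diff-port-593480 -n kube-system \
	    wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=5m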
	W1018 09:36:41.825190 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:43.826909 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:46.325026 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:48.325210 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:50.824939 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	W1018 09:36:52.825563 1485573 node_ready.go:57] node "auto-275703" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.344870998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.352008555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.3525652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.369481159Z" level=info msg="Created container c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm/dashboard-metrics-scraper" id=c88f2faa-3e41-4153-8914-a18d63994776 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.370641028Z" level=info msg="Starting container: c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6" id=85c6660b-d7df-4f8a-8bcb-75facb612c25 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.373412627Z" level=info msg="Started container" PID=1643 containerID=c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm/dashboard-metrics-scraper id=85c6660b-d7df-4f8a-8bcb-75facb612c25 name=/runtime.v1.RuntimeService/StartContainer sandboxID=06ce6b13e3aee14b3382a8fbc4e4759a9c4dadba8ab6952b80f633fac4f0a880
	Oct 18 09:36:36 default-k8s-diff-port-593480 conmon[1641]: conmon c86fad3e7dc491c95dc3 <ninfo>: container 1643 exited with status 1
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.706984609Z" level=info msg="Removing container: 043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924" id=4e4056a6-e726-4002-bf6e-7fc1d2855f42 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.719697444Z" level=info msg="Error loading conmon cgroup of container 043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924: cgroup deleted" id=4e4056a6-e726-4002-bf6e-7fc1d2855f42 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:36:36 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:36.726015717Z" level=info msg="Removed container 043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm/dashboard-metrics-scraper" id=4e4056a6-e726-4002-bf6e-7fc1d2855f42 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.985856613Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.989439225Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.989471118Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.989493895Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.99267468Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.992708705Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.99273323Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.996309089Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.996342992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.99637079Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.999413454Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.999565507Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:36:39 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:39.9996501Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:36:40 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:40.003697421Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:36:40 default-k8s-diff-port-593480 crio[650]: time="2025-10-18T09:36:40.003999664Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c86fad3e7dc49       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   06ce6b13e3aee       dashboard-metrics-scraper-6ffb444bf9-f47cm             kubernetes-dashboard
	77edd5912d990       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   dd2c354cde8fb       storage-provisioner                                    kube-system
	972710a1973b9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   49 seconds ago       Running             kubernetes-dashboard        0                   cbdce97f3968d       kubernetes-dashboard-855c9754f9-b2xsq                  kubernetes-dashboard
	f4f8772e3187d       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   a78ed3c99378b       busybox                                                default
	e96bf01e397dc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   e9c9002f69570       coredns-66bc5c9577-lxwgf                               kube-system
	7f7413ae9355d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   6e7b468a93731       kube-proxy-lz9p5                                       kube-system
	219038721a043       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   dd2c354cde8fb       storage-provisioner                                    kube-system
	40b8d46047871       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   3436b454fff20       kindnet-ptbw6                                          kube-system
	a2ae42e7111f6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   3a754f6cfd109       kube-apiserver-default-k8s-diff-port-593480            kube-system
	52621647e0872       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   da3cad0a5e31a       etcd-default-k8s-diff-port-593480                      kube-system
	f25f31e0de7b1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   353065344be32       kube-scheduler-default-k8s-diff-port-593480            kube-system
	733c7cf0be6cd       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   df40fc36d9569       kube-controller-manager-default-k8s-diff-port-593480   kube-system
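	The table above is CRI-level container state rather than kubectl output; it can typically be reproduced on the node itself (profile name from this report, crictl flags standard):
	  minikube -p default-k8s-diff-port-593480 ssh -- sudo crictl ps -a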
	
	
	==> coredns [e96bf01e397dc74fec93b72b52bc80ee6fe7bddee4e09809a7a655beb5a2e18a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53589 - 36133 "HINFO IN 6823403413020491029.5837726841550140400. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013932338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
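	The dial tcp 10.96.0.1:443 i/o timeouts show CoreDNS briefly unable to reach the in-cluster apiserver VIP while the node restarts; the watches eventually succeed, matching the pod turning Ready at 09:36:41. A sketch of pulling these logs with the standard kubeadm label:
	  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50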
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-593480
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-593480
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2a39cecdc22b5fb611b15c7501c7459c3b4d2820
	                    minikube.k8s.io/name=default-k8s-diff-port-593480
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_34_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:34:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-593480
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:36:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:36:28 +0000   Sat, 18 Oct 2025 09:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:36:28 +0000   Sat, 18 Oct 2025 09:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:36:28 +0000   Sat, 18 Oct 2025 09:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:36:28 +0000   Sat, 18 Oct 2025 09:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-593480
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                49945b4a-cdd7-400f-9239-4b91af7db42e
	  Boot ID:                    65523ab2-bf15-4ba4-9086-da57024e96a9
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 coredns-66bc5c9577-lxwgf                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m31s
	  kube-system                 etcd-default-k8s-diff-port-593480                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m36s
	  kube-system                 kindnet-ptbw6                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m32s
	  kube-system                 kube-apiserver-default-k8s-diff-port-593480             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-593480    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-lz9p5                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-scheduler-default-k8s-diff-port-593480             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-f47cm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b2xsq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m29s                  kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Warning  CgroupV1                 2m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m46s (x8 over 2m46s)  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m36s                  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m36s                  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s                  kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m36s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m32s                  node-controller  Node default-k8s-diff-port-593480 event: Registered Node default-k8s-diff-port-593480 in Controller
	  Normal   NodeReady                109s                   kubelet          Node default-k8s-diff-port-593480 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-593480 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node default-k8s-diff-port-593480 event: Registered Node default-k8s-diff-port-593480 in Controller
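	The block above is standard kubectl describe node output for the profile's single control-plane node; to reproduce:
	  kubectl --context default-k8s-diff-port-593480 \
	    describe node default-k8s-diff-port-593480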
	
	
	==> dmesg <==
	[ +29.155522] overlayfs: idmapped layers are currently not supported
	[Oct18 09:15] overlayfs: idmapped layers are currently not supported
	[ +11.661984] kauditd_printk_skb: 8 callbacks suppressed
	[ +12.494912] overlayfs: idmapped layers are currently not supported
	[Oct18 09:16] overlayfs: idmapped layers are currently not supported
	[Oct18 09:18] overlayfs: idmapped layers are currently not supported
	[Oct18 09:20] overlayfs: idmapped layers are currently not supported
	[  +1.396393] overlayfs: idmapped layers are currently not supported
	[Oct18 09:22] overlayfs: idmapped layers are currently not supported
	[ +27.782946] overlayfs: idmapped layers are currently not supported
	[Oct18 09:25] overlayfs: idmapped layers are currently not supported
	[Oct18 09:26] overlayfs: idmapped layers are currently not supported
	[Oct18 09:27] overlayfs: idmapped layers are currently not supported
	[Oct18 09:28] overlayfs: idmapped layers are currently not supported
	[ +34.309418] overlayfs: idmapped layers are currently not supported
	[Oct18 09:30] overlayfs: idmapped layers are currently not supported
	[Oct18 09:31] overlayfs: idmapped layers are currently not supported
	[ +30.479187] overlayfs: idmapped layers are currently not supported
	[Oct18 09:32] overlayfs: idmapped layers are currently not supported
	[Oct18 09:33] overlayfs: idmapped layers are currently not supported
	[Oct18 09:34] overlayfs: idmapped layers are currently not supported
	[ +34.458375] overlayfs: idmapped layers are currently not supported
	[Oct18 09:35] overlayfs: idmapped layers are currently not supported
	[ +33.991180] overlayfs: idmapped layers are currently not supported
	[Oct18 09:36] overlayfs: idmapped layers are currently not supported
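	The recurring "overlayfs: idmapped layers are currently not supported" lines are kernel notices emitted as container overlay mounts are created on this 5.15 AWS kernel; they are noise here rather than failures. A sketch for reading the ring buffer on the node (flags are standard util-linux dmesg):
	  minikube -p default-k8s-diff-port-593480 ssh -- sudo dmesg --ctime | tail -n 30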
	
	
	==> etcd [52621647e0872882c5501e4bb01f9aa34bd6d544528f4617f5c91ad85298df0c] <==
	{"level":"warn","ts":"2025-10-18T09:35:54.953982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.016014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.057727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.164262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.220317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.272835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.306096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.340660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.365609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.429826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.509890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.510982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.562582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.584447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.635924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.674255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.732429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.775975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.800471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.836775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.884542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.902454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.947330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:55.956900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:35:56.130621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36990","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:59 up 11:19,  0 user,  load average: 3.46, 3.54, 2.90
	Linux default-k8s-diff-port-593480 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [40b8d460478714d481d2976c4d0eab5fc8a6be7829e3e42b66c70ad0ca58af09] <==
	I1018 09:35:59.576785       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:35:59.578944       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1018 09:35:59.579109       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:35:59.579122       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:35:59.579134       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:35:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:35:59.985256       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:35:59.985284       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:35:59.985293       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:36:00.016340       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1018 09:36:29.985252       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1018 09:36:30.017419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1018 09:36:30.021444       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 09:36:30.021550       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 09:36:31.507928       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:36:31.507991       1 metrics.go:72] Registering metrics
	I1018 09:36:31.508053       1 controller.go:711] "Syncing nftables rules"
	I1018 09:36:39.985533       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:36:39.985588       1 main.go:301] handling current node
	I1018 09:36:49.984856       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:36:49.984892       1 main.go:301] handling current node
	I1018 09:36:59.985403       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1018 09:36:59.985432       1 main.go:301] handling current node
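	kindnet hits the same 10.96.0.1:443 timeouts as CoreDNS until its informer caches sync at 09:36:31. A sketch for checking that the apiserver Service VIP has live backends (resource and label names are standard Kubernetes):
	  kubectl get svc kubernetes -n default -o wide
	  kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes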
	
	
	==> kube-apiserver [a2ae42e7111f68e250d80963ab8db67a0cbd21a5286168c732b5ae60441c17b7] <==
	I1018 09:35:57.710024       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:35:57.710070       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1018 09:35:57.710145       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:35:57.710304       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:35:57.710315       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:35:57.710321       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:35:57.710326       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:35:57.710546       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:35:57.731115       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:35:57.731180       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:35:57.749807       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:35:57.749835       1 policy_source.go:240] refreshing policies
	E1018 09:35:57.781670       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:35:57.797802       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:35:58.335866       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:35:58.358465       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:36:00.411649       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:36:00.613543       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:36:00.685730       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:36:00.701585       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:36:00.817372       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.247.219"}
	I1018 09:36:00.838773       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.39.252"}
	I1018 09:36:02.480700       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:36:02.674811       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:36:02.729567       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [733c7cf0be6cd400ab00223c34a62e45c087d4073aeca1345162c44182d78944] <==
	I1018 09:36:02.272355       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:36:02.272369       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:36:02.279889       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:36:02.291266       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:36:02.294075       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:36:02.294212       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:36:02.294296       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:36:02.294337       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:36:02.294365       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:36:02.295494       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:36:02.296919       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:36:02.297244       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:36:02.303879       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:36:02.304025       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:36:02.304119       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-593480"
	I1018 09:36:02.304202       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:36:02.304714       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:36:02.313019       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:36:02.325601       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:36:02.325917       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:36:02.333332       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:36:02.334497       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:36:02.334541       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:36:02.340756       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:36:02.341867       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-proxy [7f7413ae9355d190c2c94e35f835f62c3b9bfedcd668a89a6a63bee7beadb8e8] <==
	I1018 09:36:00.626948       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:36:00.801215       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:36:00.908945       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:36:00.918867       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1018 09:36:00.919050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:36:01.050343       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:36:01.050415       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:36:01.072119       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:36:01.072468       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:36:01.072490       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:36:01.080629       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:36:01.080706       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:36:01.081058       1 config.go:200] "Starting service config controller"
	I1018 09:36:01.081121       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:36:01.081625       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:36:01.082493       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:36:01.083424       1 config.go:309] "Starting node config controller"
	I1018 09:36:01.083489       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:36:01.083521       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:36:01.181368       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:36:01.181506       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:36:01.183056       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
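The nodePortAddresses warning above is advisory: with the field unset, kube-proxy accepts NodePort connections on every local IP. On a kubeadm-provisioned cluster such as this one, a minimal sketch of the suggested remedy (nothing these tests require) is to set the field in the kube-proxy ConfigMap and restart the DaemonSet:

	# Sketch: set nodePortAddresses: ["primary"] under the config.conf key, then roll the pods.
	kubectl -n kube-system edit configmap kube-proxy
	kubectl -n kube-system rollout restart daemonset kube-proxy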
	
	
	==> kube-scheduler [f25f31e0de7b14e6ec30c9543448ad6c36163463aa5bb218aac0f99a95ccfe92] <==
	I1018 09:35:55.839764       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:36:00.711054       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:36:00.711246       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:36:00.731048       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:36:00.731099       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:36:00.731125       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:36:00.731150       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:36:00.731169       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:36:00.731183       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:36:00.731187       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:36:00.731191       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:36:00.831697       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:36:00.831814       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:36:00.831832       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:03.015962     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdc7\" (UniqueName: \"kubernetes.io/projected/b6bcdba7-3aa5-4913-b828-bba9ad382a0a-kube-api-access-lbdc7\") pod \"kubernetes-dashboard-855c9754f9-b2xsq\" (UID: \"b6bcdba7-3aa5-4913-b828-bba9ad382a0a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2xsq"
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:03.016646     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/181e6493-517e-4171-abff-1268e0723fd4-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-f47cm\" (UID: \"181e6493-517e-4171-abff-1268e0723fd4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm"
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:03.016824     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgwg\" (UniqueName: \"kubernetes.io/projected/181e6493-517e-4171-abff-1268e0723fd4-kube-api-access-tfgwg\") pod \"dashboard-metrics-scraper-6ffb444bf9-f47cm\" (UID: \"181e6493-517e-4171-abff-1268e0723fd4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm"
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:03.016962     777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b6bcdba7-3aa5-4913-b828-bba9ad382a0a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-b2xsq\" (UID: \"b6bcdba7-3aa5-4913-b828-bba9ad382a0a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2xsq"
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: W1018 09:36:03.220497     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/crio-cbdce97f3968d5cc491fa74cf0711ccb25fa161d09a448c85763e3a6cbe07fd1 WatchSource:0}: Error finding container cbdce97f3968d5cc491fa74cf0711ccb25fa161d09a448c85763e3a6cbe07fd1: Status 404 returned error can't find the container with id cbdce97f3968d5cc491fa74cf0711ccb25fa161d09a448c85763e3a6cbe07fd1
	Oct 18 09:36:03 default-k8s-diff-port-593480 kubelet[777]: W1018 09:36:03.238726     777 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/bfa509b1b05342b217b1ce963a2db530ef024766b92757db9c1cb5a7153e9679/crio-06ce6b13e3aee14b3382a8fbc4e4759a9c4dadba8ab6952b80f633fac4f0a880 WatchSource:0}: Error finding container 06ce6b13e3aee14b3382a8fbc4e4759a9c4dadba8ab6952b80f633fac4f0a880: Status 404 returned error can't find the container with id 06ce6b13e3aee14b3382a8fbc4e4759a9c4dadba8ab6952b80f633fac4f0a880
	Oct 18 09:36:16 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:16.650428     777 scope.go:117] "RemoveContainer" containerID="9e628c53e81d3a329adfe75e4720fcdcf60f2bfb241bf5eb77346eadefb46a4d"
	Oct 18 09:36:16 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:16.681463     777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2xsq" podStartSLOduration=7.582776215 podStartE2EDuration="14.681446392s" podCreationTimestamp="2025-10-18 09:36:02 +0000 UTC" firstStartedPulling="2025-10-18 09:36:03.224560795 +0000 UTC m=+14.192936861" lastFinishedPulling="2025-10-18 09:36:10.323230972 +0000 UTC m=+21.291607038" observedRunningTime="2025-10-18 09:36:10.660726546 +0000 UTC m=+21.629102629" watchObservedRunningTime="2025-10-18 09:36:16.681446392 +0000 UTC m=+27.649822459"
	Oct 18 09:36:17 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:17.654521     777 scope.go:117] "RemoveContainer" containerID="9e628c53e81d3a329adfe75e4720fcdcf60f2bfb241bf5eb77346eadefb46a4d"
	Oct 18 09:36:17 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:17.654828     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:17 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:17.654969     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:18 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:18.658629     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:18 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:18.658795     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:23 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:23.172830     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:23 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:23.173043     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:30 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:30.688686     777 scope.go:117] "RemoveContainer" containerID="219038721a04310118069d66f5e074f6d504bd7804e061291016a223d0b92b7c"
	Oct 18 09:36:36 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:36.341809     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:36 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:36.705353     777 scope.go:117] "RemoveContainer" containerID="043b18b289d7f25a1f8378eaab0467b83af90a4ac0b616437ecb2d0bc1397924"
	Oct 18 09:36:36 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:36.705719     777 scope.go:117] "RemoveContainer" containerID="c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6"
	Oct 18 09:36:36 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:36.705967     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:43 default-k8s-diff-port-593480 kubelet[777]: I1018 09:36:43.172638     777 scope.go:117] "RemoveContainer" containerID="c86fad3e7dc491c95dc3fffaf85c7f92e7712435dfa9dc8dc3aad44d435ee3b6"
	Oct 18 09:36:43 default-k8s-diff-port-593480 kubelet[777]: E1018 09:36:43.173294     777 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-f47cm_kubernetes-dashboard(181e6493-517e-4171-abff-1268e0723fd4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-f47cm" podUID="181e6493-517e-4171-abff-1268e0723fd4"
	Oct 18 09:36:54 default-k8s-diff-port-593480 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:36:54 default-k8s-diff-port-593480 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:36:54 default-k8s-diff-port-593480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
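The kubelet entries above show dashboard-metrics-scraper cycling through CrashLoopBackOff with an increasing back-off (10s, then 20s). The usual first two debugging steps for such a loop, sketched here with the pod name copied from the log, are the previous container's logs and the pod's event trail:

	# Logs from the last failed attempt, then the event history.
	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-f47cm --previous
	kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-f47cm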
	
	
	==> kubernetes-dashboard [972710a1973b9cf8acbd41550a4a3ebfb5ec96b320e8f9397a2deaf9b46c3e0c] <==
	2025/10/18 09:36:10 Using namespace: kubernetes-dashboard
	2025/10/18 09:36:10 Using in-cluster config to connect to apiserver
	2025/10/18 09:36:10 Using secret token for csrf signing
	2025/10/18 09:36:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:36:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:36:10 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:36:10 Generating JWE encryption key
	2025/10/18 09:36:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:36:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:36:11 Initializing JWE encryption key from synchronized object
	2025/10/18 09:36:11 Creating in-cluster Sidecar client
	2025/10/18 09:36:11 Serving insecurely on HTTP port: 9090
	2025/10/18 09:36:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:36:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:36:10 Starting overwatch
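The repeated metric client failures above mean the dashboard cannot reach its dashboard-metrics-scraper companion, which is consistent with that pod crash-looping in the kubelet log. A quick consistency check (a sketch; while the scraper is down the Service exists but has no ready endpoints):

	kubectl -n kubernetes-dashboard get service dashboard-metrics-scraper
	kubectl -n kubernetes-dashboard get endpointslices -l kubernetes.io/service-name=dashboard-metrics-scraper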
	
	
	==> storage-provisioner [219038721a04310118069d66f5e074f6d504bd7804e061291016a223d0b92b7c] <==
	I1018 09:36:00.511740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:36:30.514733       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
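10.96.0.1:443 is the cluster-internal kubernetes Service VIP, so the fatal i/o timeout above indicates this first provisioner instance came up before the apiserver was reachable over the service network; the replacement instance below gets through. Two quick checks of that VIP and its backing endpoints (a sketch):

	kubectl get service kubernetes -n default
	kubectl get endpointslices -n default -l kubernetes.io/service-name=kubernetes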
	
	
	==> storage-provisioner [77edd5912d990436794cf936b8f51159dc4b9c1c9baaa23fc03d051c5c9c7c44] <==
	W1018 09:36:30.757571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:34.212614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:38.472599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:42.071481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:45.125590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:48.147649       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:48.155485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:36:48.155624       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:36:48.155801       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-593480_eb646e8d-a990-4470-8e0d-5e776b980fbc!
	I1018 09:36:48.155914       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75bb76a9-c543-40fa-ba6e-108e81012c94", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-593480_eb646e8d-a990-4470-8e0d-5e776b980fbc became leader
	W1018 09:36:48.165878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:48.169351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:36:48.256816       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-593480_eb646e8d-a990-4470-8e0d-5e776b980fbc!
	W1018 09:36:50.172673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:50.180529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:52.183726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:52.187925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:54.191250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:54.197979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:56.200698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:56.206476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:58.210084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:36:58.224135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:37:00.286466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:37:00.333886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
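The warnings above are expected noise: the storage provisioner still takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, per the LeaderElection event), so every poll of that object trips the v1.33+ deprecation notice. The legacy lock and the replacement API can be compared directly (a sketch):

	# The Endpoints-based lock the provisioner still uses ...
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# ... and the discovery.k8s.io/v1 objects the warning recommends.
	kubectl get endpointslices -A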
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480: exit status 2 (533.912823ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
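The --format={{.APIServer}} flag used above is a Go template over minikube's status fields, and a non-zero exit from status still prints the component state, which is why the harness notes it "may be ok". Other fields, or the whole structure, can be pulled the same way (a sketch; .Host and .Kubelet are assumed field names alongside the .APIServer field the test itself uses):

	out/minikube-linux-arm64 status -p default-k8s-diff-port-593480 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'
	out/minikube-linux-arm64 status -p default-k8s-diff-port-593480 --output json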
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-593480 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.22s)
E1018 09:42:43.343403 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.31
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 5.45
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.11
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 175.25
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 8.84
48 TestAddons/StoppedEnableDisable 12.41
49 TestCertOptions 39.7
50 TestCertExpiration 254.03
52 TestForceSystemdFlag 35.71
53 TestForceSystemdEnv 32.58
59 TestErrorSpam/setup 32.9
60 TestErrorSpam/start 0.75
61 TestErrorSpam/status 1.07
62 TestErrorSpam/pause 6.69
63 TestErrorSpam/unpause 5.45
64 TestErrorSpam/stop 1.5
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 76.74
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 27.96
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.51
76 TestFunctional/serial/CacheCmd/cache/add_local 1.21
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 42.19
85 TestFunctional/serial/ComponentHealth 0.09
86 TestFunctional/serial/LogsCmd 1.55
87 TestFunctional/serial/LogsFileCmd 1.58
88 TestFunctional/serial/InvalidService 4.22
90 TestFunctional/parallel/ConfigCmd 0.48
91 TestFunctional/parallel/DashboardCmd 11.02
92 TestFunctional/parallel/DryRun 0.57
93 TestFunctional/parallel/InternationalLanguage 0.28
94 TestFunctional/parallel/StatusCmd 1.34
99 TestFunctional/parallel/AddonsCmd 0.19
100 TestFunctional/parallel/PersistentVolumeClaim 23.57
102 TestFunctional/parallel/SSHCmd 0.79
103 TestFunctional/parallel/CpCmd 2.16
105 TestFunctional/parallel/FileSync 0.36
106 TestFunctional/parallel/CertSync 2.25
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
114 TestFunctional/parallel/License 0.31
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.31
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
128 TestFunctional/parallel/ProfileCmd/profile_list 0.4
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
130 TestFunctional/parallel/MountCmd/any-port 7.97
131 TestFunctional/parallel/MountCmd/specific-port 2.18
132 TestFunctional/parallel/ServiceCmd/List 0.64
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.91
138 TestFunctional/parallel/Version/short 0.08
139 TestFunctional/parallel/Version/components 1.33
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.89
145 TestFunctional/parallel/ImageCommands/Setup 0.64
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
153 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
155 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 204.76
164 TestMultiControlPlane/serial/DeployApp 6.73
165 TestMultiControlPlane/serial/PingHostFromPods 1.77
166 TestMultiControlPlane/serial/AddWorkerNode 60.86
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
169 TestMultiControlPlane/serial/CopyFile 19.74
170 TestMultiControlPlane/serial/StopSecondaryNode 12.83
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.79
172 TestMultiControlPlane/serial/RestartSecondaryNode 29.37
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.37
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 127.27
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.7
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
177 TestMultiControlPlane/serial/StopCluster 36.2
178 TestMultiControlPlane/serial/RestartCluster 81.83
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
180 TestMultiControlPlane/serial/AddSecondaryNode 83.3
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
185 TestJSONOutput/start/Command 80.95
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 37.91
211 TestKicCustomNetwork/use_default_bridge_network 37.37
212 TestKicExistingNetwork 35.43
213 TestKicCustomSubnet 37.13
214 TestKicStaticIP 34.3
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 74.66
219 TestMountStart/serial/StartWithMountFirst 9.05
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.34
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.66
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 140.85
231 TestMultiNode/serial/DeployApp2Nodes 5.22
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 59.55
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.81
237 TestMultiNode/serial/StopNode 2.45
238 TestMultiNode/serial/StartAfterStop 8.22
239 TestMultiNode/serial/RestartKeepsNodes 77.47
240 TestMultiNode/serial/DeleteNode 5.66
241 TestMultiNode/serial/StopMultiNode 23.99
242 TestMultiNode/serial/RestartMultiNode 54.92
243 TestMultiNode/serial/ValidateNameConflict 37.58
248 TestPreload 122.21
250 TestScheduledStopUnix 106.85
253 TestInsufficientStorage 14.92
254 TestRunningBinaryUpgrade 55.05
256 TestKubernetesUpgrade 353.84
257 TestMissingContainerUpgrade 118.5
259 TestPause/serial/Start 90.28
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
262 TestNoKubernetes/serial/StartWithK8s 42.93
263 TestNoKubernetes/serial/StartWithStopK8s 38.88
264 TestNoKubernetes/serial/Start 9.02
265 TestPause/serial/SecondStartNoReconfiguration 33.99
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
267 TestNoKubernetes/serial/ProfileList 1.38
268 TestNoKubernetes/serial/Stop 1.37
269 TestNoKubernetes/serial/StartNoArgs 7.83
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.46
272 TestStoppedBinaryUpgrade/Setup 0.91
273 TestStoppedBinaryUpgrade/Upgrade 59.3
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.27
289 TestNetworkPlugins/group/false 3.66
294 TestStartStop/group/old-k8s-version/serial/FirstStart 59.06
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.37
297 TestStartStop/group/old-k8s-version/serial/Stop 12
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 49.76
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
305 TestStartStop/group/no-preload/serial/FirstStart 76.23
307 TestStartStop/group/embed-certs/serial/FirstStart 91.63
308 TestStartStop/group/no-preload/serial/DeployApp 8.32
310 TestStartStop/group/no-preload/serial/Stop 12.05
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
312 TestStartStop/group/no-preload/serial/SecondStart 52.13
313 TestStartStop/group/embed-certs/serial/DeployApp 9.4
315 TestStartStop/group/embed-certs/serial/Stop 12.29
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/embed-certs/serial/SecondStart 53.88
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.91
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
329 TestStartStop/group/newest-cni/serial/FirstStart 41.85
330 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/Stop 1.37
333 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
334 TestStartStop/group/newest-cni/serial/SecondStart 18.46
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
340 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.81
342 TestNetworkPlugins/group/auto/Start 85.69
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 62.54
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
349 TestNetworkPlugins/group/auto/KubeletFlags 0.45
350 TestNetworkPlugins/group/auto/NetCatPod 11.42
351 TestNetworkPlugins/group/kindnet/Start 85.96
352 TestNetworkPlugins/group/auto/DNS 0.16
353 TestNetworkPlugins/group/auto/Localhost 0.22
354 TestNetworkPlugins/group/auto/HairPin 0.19
355 TestNetworkPlugins/group/calico/Start 63.83
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.42
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/calico/KubeletFlags 0.3
361 TestNetworkPlugins/group/calico/NetCatPod 11.34
362 TestNetworkPlugins/group/kindnet/DNS 0.3
363 TestNetworkPlugins/group/kindnet/Localhost 0.17
364 TestNetworkPlugins/group/kindnet/HairPin 0.17
365 TestNetworkPlugins/group/calico/DNS 0.23
366 TestNetworkPlugins/group/calico/Localhost 0.16
367 TestNetworkPlugins/group/calico/HairPin 0.16
368 TestNetworkPlugins/group/custom-flannel/Start 72.21
369 TestNetworkPlugins/group/enable-default-cni/Start 57.26
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.38
374 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
375 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
376 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
377 TestNetworkPlugins/group/custom-flannel/DNS 0.15
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
380 TestNetworkPlugins/group/flannel/Start 67.59
381 TestNetworkPlugins/group/bridge/Start 79.61
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
384 TestNetworkPlugins/group/flannel/NetCatPod 9.25
385 TestNetworkPlugins/group/flannel/DNS 0.16
386 TestNetworkPlugins/group/flannel/Localhost 0.14
387 TestNetworkPlugins/group/flannel/HairPin 0.14
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
389 TestNetworkPlugins/group/bridge/NetCatPod 10.26
390 TestNetworkPlugins/group/bridge/DNS 0.2
391 TestNetworkPlugins/group/bridge/Localhost 0.18
392 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.28.0/json-events (5.31s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-395497 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-395497 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.307065827s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.31s)
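With -o=json, minikube emits each progress step as a CloudEvents-style JSON line on stdout, which is the stream the json-events test consumes. A rough way to watch it by hand (a sketch; the demo-profile name is hypothetical and jq is assumed to be installed):

	out/minikube-linux-arm64 start -o=json --download-only -p demo-profile \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker | jq -r '.type'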

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 08:29:39.581038 1276097 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 08:29:39.581118 1276097 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
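The preload-exists check is purely local, which is why it takes 0s: it only looks for the tarball in the profile cache and downloads nothing. The equivalent check by hand (a sketch; path copied from the log):

	ls -lh /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/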

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-395497
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-395497: exit status 85 (82.091462ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-395497 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-395497 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:29:34.323054 1276102 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:34.323235 1276102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:34.323248 1276102 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:34.323264 1276102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:34.323529 1276102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	W1018 08:29:34.323698 1276102 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21767-1274243/.minikube/config/config.json: open /home/jenkins/minikube-integration/21767-1274243/.minikube/config/config.json: no such file or directory
	I1018 08:29:34.324164 1276102 out.go:368] Setting JSON to true
	I1018 08:29:34.325052 1276102 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36722,"bootTime":1760739453,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 08:29:34.325116 1276102 start.go:141] virtualization:  
	I1018 08:29:34.329100 1276102 out.go:99] [download-only-395497] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1018 08:29:34.329271 1276102 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 08:29:34.329325 1276102 notify.go:220] Checking for updates...
	I1018 08:29:34.332274 1276102 out.go:171] MINIKUBE_LOCATION=21767
	I1018 08:29:34.335368 1276102 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:34.338219 1276102 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:29:34.341036 1276102 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 08:29:34.344018 1276102 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 08:29:34.349574 1276102 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 08:29:34.349878 1276102 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:34.376049 1276102 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 08:29:34.376162 1276102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:34.430804 1276102 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-18 08:29:34.421769618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:29:34.430915 1276102 docker.go:318] overlay module found
	I1018 08:29:34.434022 1276102 out.go:99] Using the docker driver based on user configuration
	I1018 08:29:34.434069 1276102 start.go:305] selected driver: docker
	I1018 08:29:34.434081 1276102 start.go:925] validating driver "docker" against <nil>
	I1018 08:29:34.434190 1276102 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:34.486148 1276102 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-18 08:29:34.477675039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:29:34.486295 1276102 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:34.486561 1276102 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 08:29:34.486711 1276102 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 08:29:34.489730 1276102 out.go:171] Using Docker driver with root privileges
	I1018 08:29:34.492671 1276102 cni.go:84] Creating CNI manager for ""
	I1018 08:29:34.492744 1276102 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:29:34.492758 1276102 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:29:34.492845 1276102 start.go:349] cluster config:
	{Name:download-only-395497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-395497 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:34.495777 1276102 out.go:99] Starting "download-only-395497" primary control-plane node in "download-only-395497" cluster
	I1018 08:29:34.495800 1276102 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:29:34.498762 1276102 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:29:34.498803 1276102 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 08:29:34.498996 1276102 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:29:34.515139 1276102 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:29:34.515335 1276102 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:29:34.515440 1276102 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:29:34.566302 1276102 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 08:29:34.566328 1276102 cache.go:58] Caching tarball of preloaded images
	I1018 08:29:34.566498 1276102 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 08:29:34.570630 1276102 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 08:29:34.570662 1276102 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1018 08:29:34.661440 1276102 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1018 08:29:34.661574 1276102 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1018 08:29:38.981614 1276102 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1018 08:29:38.981971 1276102 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/download-only-395497/config.json ...
	I1018 08:29:38.982004 1276102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/download-only-395497/config.json: {Name:mkd9d162c4de28fc4cfce20a612ed2847c8c6516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:38.982176 1276102 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 08:29:38.982355 1276102 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-395497 host does not exist
	  To start a cluster, run: "minikube start -p download-only-395497"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
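The download flow in the log above is checksummed twice: the preload tarball against an md5 fetched from the GCS API, and kubectl against the upstream .sha256 file. Re-verifying the tarball by hand is a two-liner (a sketch; URL and checksum copied from the log):

	curl -LO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	echo 'e092595ade89dbfc477bd4cd6b9c633b  preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4' | md5sum -c -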

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-395497
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (5.45s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-387437 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-387437 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.449178757s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.45s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 08:29:45.467928 1276097 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 08:29:45.467969 1276097 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.11s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-387437
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-387437: exit status 85 (111.633075ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-395497 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-395497 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ delete  │ -p download-only-395497                                                                                                                                                   │ download-only-395497 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │ 18 Oct 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-387437 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-387437 │ jenkins │ v1.37.0 │ 18 Oct 25 08:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:29:40
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:29:40.067025 1276298 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:29:40.067257 1276298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:40.067284 1276298 out.go:374] Setting ErrFile to fd 2...
	I1018 08:29:40.067302 1276298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:29:40.067608 1276298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:29:40.068131 1276298 out.go:368] Setting JSON to true
	I1018 08:29:40.069052 1276298 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36727,"bootTime":1760739453,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 08:29:40.069156 1276298 start.go:141] virtualization:  
	I1018 08:29:40.072612 1276298 out.go:99] [download-only-387437] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 08:29:40.072920 1276298 notify.go:220] Checking for updates...
	I1018 08:29:40.076224 1276298 out.go:171] MINIKUBE_LOCATION=21767
	I1018 08:29:40.079230 1276298 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:29:40.082209 1276298 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:29:40.085058 1276298 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 08:29:40.087953 1276298 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1018 08:29:40.093652 1276298 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 08:29:40.093971 1276298 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:29:40.128737 1276298 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 08:29:40.128863 1276298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:40.187750 1276298 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 08:29:40.177075092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:29:40.188087 1276298 docker.go:318] overlay module found
	I1018 08:29:40.191081 1276298 out.go:99] Using the docker driver based on user configuration
	I1018 08:29:40.191134 1276298 start.go:305] selected driver: docker
	I1018 08:29:40.191146 1276298 start.go:925] validating driver "docker" against <nil>
	I1018 08:29:40.191253 1276298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:29:40.247942 1276298 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-18 08:29:40.238938087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:29:40.248110 1276298 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:29:40.248402 1276298 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1018 08:29:40.248555 1276298 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 08:29:40.251640 1276298 out.go:171] Using Docker driver with root privileges
	I1018 08:29:40.254431 1276298 cni.go:84] Creating CNI manager for ""
	I1018 08:29:40.254501 1276298 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:29:40.254515 1276298 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:29:40.254593 1276298 start.go:349] cluster config:
	{Name:download-only-387437 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-387437 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:29:40.257590 1276298 out.go:99] Starting "download-only-387437" primary control-plane node in "download-only-387437" cluster
	I1018 08:29:40.257615 1276298 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:29:40.260389 1276298 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:29:40.260428 1276298 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:40.260596 1276298 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:29:40.275944 1276298 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:29:40.276073 1276298 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:29:40.276099 1276298 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 08:29:40.276108 1276298 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 08:29:40.276115 1276298 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 08:29:40.315612 1276298 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 08:29:40.315664 1276298 cache.go:58] Caching tarball of preloaded images
	I1018 08:29:40.315881 1276298 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:40.318963 1276298 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1018 08:29:40.318985 1276298 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1018 08:29:40.405657 1276298 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1018 08:29:40.405711 1276298 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1018 08:29:44.680094 1276298 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:29:44.680559 1276298 profile.go:143] Saving config to /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/download-only-387437/config.json ...
	I1018 08:29:44.680595 1276298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/download-only-387437/config.json: {Name:mk6cfa0ce42b941fcc6f7437120ac0f66aa1c9cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:29:44.680781 1276298 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:29:44.680946 1276298 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21767-1274243/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-387437 host does not exist
	  To start a cluster, run: "minikube start -p download-only-387437"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.11s)
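Note: a minimal sketch of the preload fetch recorded in the "Last Start" log above, assuming only curl and md5sum on the host; the URL and md5 checksum are copied verbatim from the download.go lines in the log:

    # Fetch the CRI-O preload tarball that minikube downloaded above.
    curl -fLo preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
    # Verify it against the checksum the GCS API returned.
    echo "bc3e4aa50814345ef9ba3452bb5efb9f  preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4" | md5sum -c -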

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-387437
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1018 08:29:46.619829 1276097 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-375009 --alsologtostderr --binary-mirror http://127.0.0.1:42419 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-375009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-375009
--- PASS: TestBinaryMirror (0.61s)
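Note: TestBinaryMirror points minikube at a local HTTP server via --binary-mirror; judging from the binary.go line above, the mirror is expected to serve the same release path layout as dl.k8s.io. A hedged sketch (layout assumed, not confirmed by this log):

    # Default fetch, from the log: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl
    # With --binary-mirror http://127.0.0.1:42419, the same path would be requested from the mirror:
    curl -fI "http://127.0.0.1:42419/release/v1.34.1/bin/linux/arm64/kubectl"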

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-718596
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-718596: exit status 85 (72.56229ms)

-- stdout --
	* Profile "addons-718596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-718596"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-718596
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-718596: exit status 85 (80.722607ms)

-- stdout --
	* Profile "addons-718596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-718596"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (175.25s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-718596 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-718596 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m55.243328526s)
--- PASS: TestAddons/Setup (175.25s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-718596 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-718596 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (8.84s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-718596 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-718596 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0d911a7b-137f-4786-84c4-787c87e49cd2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0d911a7b-137f-4786-84c4-787c87e49cd2] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003424816s
addons_test.go:694: (dbg) Run:  kubectl --context addons-718596 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-718596 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-718596 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-718596 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.84s)
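Note: the FakeCredentials test above asserts on env vars injected into the busybox pod by the gcp-auth webhook. A quick spot-check using only kubectl (a sketch; the jsonpath assumes a single container in the pod):

    # Expect GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT among the names,
    # matching the printenv calls in the test.
    kubectl --context addons-718596 get pod busybox -o jsonpath='{.spec.containers[0].env[*].name}'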

TestAddons/StoppedEnableDisable (12.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-718596
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-718596: (12.117083536s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-718596
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-718596
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-718596
--- PASS: TestAddons/StoppedEnableDisable (12.41s)

TestCertOptions (39.7s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-783705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-783705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.801430593s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-783705 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-783705 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-783705 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-783705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-783705
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-783705: (2.173031497s)
--- PASS: TestCertOptions (39.70s)

TestCertExpiration (254.03s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-854768 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-854768 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (45.07048834s)
E1018 09:27:41.682812 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:27:43.343977 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-854768 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.303044869s)
helpers_test.go:175: Cleaning up "cert-expiration-854768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-854768
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-854768: (2.659001499s)
--- PASS: TestCertExpiration (254.03s)
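Note: TestCertExpiration restarts the cluster with --cert-expiration=3m and then 8760h. A hedged one-liner to inspect the resulting apiserver cert lifetime while the profile still exists, reusing the cert path TestCertOptions reads above:

    out/minikube-linux-arm64 -p cert-expiration-854768 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"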

TestForceSystemdFlag (35.71s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-664051 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-664051 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.921003688s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-664051 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-664051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-664051
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-664051: (2.474743663s)
--- PASS: TestForceSystemdFlag (35.71s)
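Note: the test above asserts on /etc/crio/crio.conf.d/02-crio.conf. A hedged spot-check for the effect of --force-systemd, assuming CRI-O's standard cgroup_manager key:

    # Expect: cgroup_manager = "systemd"
    out/minikube-linux-arm64 -p force-systemd-flag-664051 ssh \
      "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"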

TestForceSystemdEnv (32.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-406177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-406177 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.031332067s)
helpers_test.go:175: Cleaning up "force-systemd-env-406177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-406177
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-406177: (2.551827382s)
--- PASS: TestForceSystemdEnv (32.58s)

TestErrorSpam/setup (32.9s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-994682 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-994682 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-994682 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-994682 --driver=docker  --container-runtime=crio: (32.899276171s)
--- PASS: TestErrorSpam/setup (32.90s)

TestErrorSpam/start (0.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (6.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 pause: exit status 80 (2.304301156s)

-- stdout --
	* Pausing node nospam-994682 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:36:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 pause: exit status 80 (2.120061465s)

-- stdout --
	* Pausing node nospam-994682 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:36:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 pause: exit status 80 (2.259453974s)

-- stdout --
	* Pausing node nospam-994682 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:36:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.69s)
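Note: all three pause attempts above fail identically: minikube shells into the node and runs "sudo runc list -f json", which exits 1 because /run/runc does not exist. A minimal repro against the same profile, using the exact command from the stderr above:

    out/minikube-linux-arm64 -p nospam-994682 ssh "sudo runc list -f json"
    # Confirm the missing state directory named in the error:
    out/minikube-linux-arm64 -p nospam-994682 ssh "ls -ld /run/runc"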

TestErrorSpam/unpause (5.45s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 unpause: exit status 80 (1.634946252s)

-- stdout --
	* Unpausing node nospam-994682 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:36:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 unpause: exit status 80 (1.936604365s)

-- stdout --
	* Unpausing node nospam-994682 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:36:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 unpause: exit status 80 (1.876127853s)

-- stdout --
	* Unpausing node nospam-994682 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T08:36:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.45s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 stop: (1.299914464s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-994682 --log_dir /tmp/nospam-994682 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21767-1274243/.minikube/files/etc/test/nested/copy/1276097/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.74s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-441731 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1018 08:37:43.353094 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:43.359558 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:43.370912 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:43.392333 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:43.433762 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:43.515258 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:43.676740 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:43.998467 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:44.640100 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:45.922436 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:48.483724 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:37:53.605868 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:38:03.847254 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-441731 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.740251786s)
--- PASS: TestFunctional/serial/StartWithProxy (76.74s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.96s)

=== RUN   TestFunctional/serial/SoftStart
I1018 08:38:12.702534 1276097 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-441731 --alsologtostderr -v=8
E1018 08:38:24.329011 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-441731 --alsologtostderr -v=8: (27.953360497s)
functional_test.go:678: soft start took 27.958640068s for "functional-441731" cluster.
I1018 08:38:40.656208 1276097 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (27.96s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-441731 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 cache add registry.k8s.io/pause:3.1: (1.18945444s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 cache add registry.k8s.io/pause:3.3: (1.219356555s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 cache add registry.k8s.io/pause:latest: (1.098109916s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.51s)

TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-441731 /tmp/TestFunctionalserialCacheCmdcacheadd_local874357979/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 cache add minikube-local-cache-test:functional-441731
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 cache delete minikube-local-cache-test:functional-441731
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-441731
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.21s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.25675ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 kubectl -- --context functional-441731 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-441731 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (42.19s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-441731 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 08:39:05.291282 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-441731 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.194496065s)
functional_test.go:776: restart took 42.194588255s for "functional-441731" cluster.
I1018 08:39:30.368840 1276097 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (42.19s)
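Note: ExtraConfig restarts the cluster with --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision. A hedged check that the flag reached kube-apiserver, assuming the standard kubeadm static-pod manifest path:

    out/minikube-linux-arm64 -p functional-441731 ssh \
      "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"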

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-441731 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
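
The health check above selects control-plane pods by label; the equivalent manual query (wide output instead of JSON, for readability):

	# etcd, kube-apiserver, kube-controller-manager and kube-scheduler should all be Ready
	kubectl --context functional-441731 get po -l tier=control-plane -n kube-system -o wide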

                                                
                                    
TestFunctional/serial/LogsCmd (1.55s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 logs: (1.554443211s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.58s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 logs --file /tmp/TestFunctionalserialLogsFileCmd2202488403/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 logs --file /tmp/TestFunctionalserialLogsFileCmd2202488403/001/logs.txt: (1.577634618s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.58s)

                                                
                                    
TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-441731 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-441731
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-441731: exit status 115 (390.610249ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32638 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-441731 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)
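
The exit status 115 above is the pass condition, not a flake: `minikube service` refuses to open a service whose selector matches no running pod. A condensed sketch (the testdata path assumes the test/integration working directory):

	kubectl --context functional-441731 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-441731 || echo "exit: $?"   # 115
	kubectl --context functional-441731 delete -f testdata/invalidsvc.yaml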

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 config get cpus: exit status 14 (64.867947ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 config get cpus: exit status 14 (94.611895ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
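
The two non-zero exits above are deliberate: `config get` on an unset key exits 14. The full round-trip the test performs:

	out/minikube-linux-arm64 -p functional-441731 config get cpus || echo "exit: $?"   # 14, key unset
	out/minikube-linux-arm64 -p functional-441731 config set cpus 2
	out/minikube-linux-arm64 -p functional-441731 config get cpus                      # prints 2
	out/minikube-linux-arm64 -p functional-441731 config unset cpus
	out/minikube-linux-arm64 -p functional-441731 config get cpus || echo "exit: $?"   # 14 again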

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.02s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-441731 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-441731 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1302779: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.02s)

                                                
                                    
TestFunctional/parallel/DryRun (0.57s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-441731 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-441731 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (237.807142ms)

                                                
                                                
-- stdout --
	* [functional-441731] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:50:06.797584 1302167 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:50:06.797870 1302167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:50:06.797897 1302167 out.go:374] Setting ErrFile to fd 2...
	I1018 08:50:06.797917 1302167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:50:06.798253 1302167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:50:06.798657 1302167 out.go:368] Setting JSON to false
	I1018 08:50:06.799615 1302167 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37954,"bootTime":1760739453,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 08:50:06.799705 1302167 start.go:141] virtualization:  
	I1018 08:50:06.802919 1302167 out.go:179] * [functional-441731] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 08:50:06.806719 1302167 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:50:06.806801 1302167 notify.go:220] Checking for updates...
	I1018 08:50:06.814922 1302167 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:50:06.818241 1302167 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:50:06.821637 1302167 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 08:50:06.824790 1302167 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 08:50:06.827527 1302167 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:50:06.833361 1302167 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:50:06.834025 1302167 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:50:06.865206 1302167 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 08:50:06.865319 1302167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:50:06.963239 1302167 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-18 08:50:06.953908495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:50:06.963343 1302167 docker.go:318] overlay module found
	I1018 08:50:06.966336 1302167 out.go:179] * Using the docker driver based on existing profile
	I1018 08:50:06.969017 1302167 start.go:305] selected driver: docker
	I1018 08:50:06.969036 1302167 start.go:925] validating driver "docker" against &{Name:functional-441731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-441731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:50:06.969148 1302167 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:50:06.972763 1302167 out.go:203] 
	W1018 08:50:06.975630 1302167 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 08:50:06.979424 1302167 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-441731 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.57s)
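
The exit status 23 above shows that --dry-run still runs flag validation: 250MB is below minikube's 1800MB usable minimum, so the request is rejected without touching the cluster. Reproduced by hand:

	# expect RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23; the profile is left untouched
	out/minikube-linux-arm64 start -p functional-441731 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio || echo "exit: $?"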

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-441731 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-441731 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (276.883756ms)

                                                
                                                
-- stdout --
	* [functional-441731] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 08:50:06.553681 1302081 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:50:06.553814 1302081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:50:06.553824 1302081 out.go:374] Setting ErrFile to fd 2...
	I1018 08:50:06.553829 1302081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:50:06.555518 1302081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:50:06.556013 1302081 out.go:368] Setting JSON to false
	I1018 08:50:06.556833 1302081 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37954,"bootTime":1760739453,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 08:50:06.556906 1302081 start.go:141] virtualization:  
	I1018 08:50:06.560187 1302081 out.go:179] * [functional-441731] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1018 08:50:06.563974 1302081 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 08:50:06.564128 1302081 notify.go:220] Checking for updates...
	I1018 08:50:06.569843 1302081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:50:06.572707 1302081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 08:50:06.575739 1302081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 08:50:06.578499 1302081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 08:50:06.581304 1302081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:50:06.584478 1302081 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:50:06.585033 1302081 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:50:06.627188 1302081 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 08:50:06.627305 1302081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:50:06.723018 1302081 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 08:50:06.707165307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:50:06.723116 1302081 docker.go:318] overlay module found
	I1018 08:50:06.726064 1302081 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 08:50:06.728948 1302081 start.go:305] selected driver: docker
	I1018 08:50:06.728962 1302081 start.go:925] validating driver "docker" against &{Name:functional-441731 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-441731 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:50:06.729052 1302081 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:50:06.732518 1302081 out.go:203] 
	W1018 08:50:06.735202 1302081 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1018 08:50:06.737864 1302081 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)
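
The three invocations above cover the default, Go-template and JSON output modes; the template keys mirror the test verbatim, including its literal "kublet" spelling:

	out/minikube-linux-arm64 -p functional-441731 status
	out/minikube-linux-arm64 -p functional-441731 status \
	  -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-arm64 -p functional-441731 status -o json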

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (23.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [8d98d636-38fa-4717-9ac3-f3698ce74d31] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003031604s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-441731 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-441731 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-441731 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-441731 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [59d76522-7e46-45f7-b205-3421609006ce] Pending
helpers_test.go:352: "sp-pod" [59d76522-7e46-45f7-b205-3421609006ce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [59d76522-7e46-45f7-b205-3421609006ce] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00343422s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-441731 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-441731 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-441731 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d6dd7b61-8c8c-4578-a06b-52b008e7f7d1] Pending
helpers_test.go:352: "sp-pod" [d6dd7b61-8c8c-4578-a06b-52b008e7f7d1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003770775s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-441731 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.57s)
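
The sequence above is a persistence check: write a file through the PVC, delete and recreate the pod, and confirm the file survived. Condensed (the manifests are the suite's testdata, paths assume the test/integration working directory):

	kubectl --context functional-441731 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-441731 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-441731 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-441731 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-441731 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-441731 exec sp-pod -- ls /tmp/mount   # foo should still be there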

                                                
                                    
TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.16s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh -n functional-441731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 cp functional-441731:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3841291346/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh -n functional-441731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh -n functional-441731 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.16s)

                                                
                                    
TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1276097/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /etc/test/nested/copy/1276097/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

                                                
                                    
TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1276097.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /etc/ssl/certs/1276097.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1276097.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /usr/share/ca-certificates/1276097.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/12760972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /etc/ssl/certs/12760972.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/12760972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /usr/share/ca-certificates/12760972.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)
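
The .pem names above are derived from the test process ID (1276097, visible in the suite's own log lines), and the .0 names are the OpenSSL subject-hash forms; the check is simply that each cert is readable at both sync locations inside the guest:

	out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /etc/ssl/certs/1276097.pem"
	out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /usr/share/ca-certificates/1276097.pem"
	out/minikube-linux-arm64 -p functional-441731 ssh "sudo cat /etc/ssl/certs/51391683.0"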

                                                
                                    
TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-441731 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 ssh "sudo systemctl is-active docker": exit status 1 (380.259402ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 ssh "sudo systemctl is-active containerd": exit status 1 (379.278826ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)
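
The two exit-status-1 results above are the pass condition: with crio as the active runtime, the other runtimes must be inactive. `systemctl is-active` exits 3 for an inactive unit, which surfaces as a non-zero ssh exit:

	out/minikube-linux-arm64 -p functional-441731 ssh "sudo systemctl is-active docker"       # inactive
	out/minikube-linux-arm64 -p functional-441731 ssh "sudo systemctl is-active containerd"   # inactive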

                                                
                                    
TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-441731 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-441731 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-441731 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1298298: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-441731 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-441731 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.31s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-441731 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [a0fba874-cb59-4652-befd-f3716044ad4b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [a0fba874-cb59-4652-befd-f3716044ad4b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003900761s
I1018 08:39:48.216237 1276097 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-441731 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.204.138 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
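
The direct-access check relies on the tunnel started in the earlier subtests; a hedged sketch of the whole flow (the backgrounding and the curl step are illustrative additions, not taken from the log):

	out/minikube-linux-arm64 -p functional-441731 tunnel &
	IP=$(kubectl --context functional-441731 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -sI "http://$IP"   # http://10.105.204.138 in this run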

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-441731 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "348.157892ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.52263ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "378.535869ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.984336ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdany-port629843887/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760777393453901704" to /tmp/TestFunctionalparallelMountCmdany-port629843887/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760777393453901704" to /tmp/TestFunctionalparallelMountCmdany-port629843887/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760777393453901704" to /tmp/TestFunctionalparallelMountCmdany-port629843887/001/test-1760777393453901704
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.992198ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 08:49:53.817991 1276097 retry.go:31] will retry after 339.960216ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 08:49 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 08:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 08:49 test-1760777393453901704
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh cat /mount-9p/test-1760777393453901704
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-441731 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8101b2a2-cf22-4f56-b418-6c86f36bd723] Pending
helpers_test.go:352: "busybox-mount" [8101b2a2-cf22-4f56-b418-6c86f36bd723] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8101b2a2-cf22-4f56-b418-6c86f36bd723] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8101b2a2-cf22-4f56-b418-6c86f36bd723] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.022622571s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-441731 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdany-port629843887/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.97s)
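
The mount tests drive a 9p mount of a host directory into the guest; a minimal manual version (/tmp/hostdir is a hypothetical host path, and the first findmnt may need a retry while the mount comes up, as the retries above show):

	out/minikube-linux-arm64 mount -p functional-441731 /tmp/hostdir:/mount-9p &
	out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-441731 ssh -- ls -la /mount-9p
	out/minikube-linux-arm64 -p functional-441731 ssh "sudo umount -f /mount-9p"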

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.18s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdspecific-port788136510/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (356.739715ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1018 08:50:01.776767 1276097 retry.go:31] will retry after 554.63875ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdspecific-port788136510/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 ssh "sudo umount -f /mount-9p": exit status 1 (369.768652ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-441731 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdspecific-port788136510/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3584974872/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3584974872/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3584974872/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-441731 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3584974872/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3584974872/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-441731 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3584974872/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 service list -o json
functional_test.go:1504: Took "910.431996ms" to run "out/minikube-linux-arm64 -p functional-441731 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.91s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.33s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 version -o=json --components: (1.327993312s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-441731 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-441731 image ls --format short --alsologtostderr:
I1018 08:50:21.430832 1304579 out.go:360] Setting OutFile to fd 1 ...
I1018 08:50:21.430939 1304579 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:21.430947 1304579 out.go:374] Setting ErrFile to fd 2...
I1018 08:50:21.430952 1304579 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:21.431294 1304579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
I1018 08:50:21.432265 1304579 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:21.432408 1304579 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:21.432895 1304579 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
I1018 08:50:21.451420 1304579 ssh_runner.go:195] Run: systemctl --version
I1018 08:50:21.451481 1304579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
I1018 08:50:21.472456 1304579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
I1018 08:50:21.586393 1304579 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-441731 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-441731 image ls --format table --alsologtostderr:
I1018 08:50:22.425342 1304833 out.go:360] Setting OutFile to fd 1 ...
I1018 08:50:22.425591 1304833 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:22.425618 1304833 out.go:374] Setting ErrFile to fd 2...
I1018 08:50:22.425636 1304833 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:22.425932 1304833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
I1018 08:50:22.426586 1304833 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:22.426764 1304833 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:22.427262 1304833 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
I1018 08:50:22.451140 1304833 ssh_runner.go:195] Run: systemctl --version
I1018 08:50:22.451189 1304833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
I1018 08:50:22.478318 1304833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
I1018 08:50:22.596387 1304833 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-441731 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-441731 image ls --format json --alsologtostderr:
I1018 08:50:22.173500 1304763 out.go:360] Setting OutFile to fd 1 ...
I1018 08:50:22.173656 1304763 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:22.173679 1304763 out.go:374] Setting ErrFile to fd 2...
I1018 08:50:22.173701 1304763 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:22.173958 1304763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
I1018 08:50:22.174577 1304763 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:22.174698 1304763 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:22.175136 1304763 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
I1018 08:50:22.198382 1304763 ssh_runner.go:195] Run: systemctl --version
I1018 08:50:22.198452 1304763 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
I1018 08:50:22.217633 1304763 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
I1018 08:50:22.331304 1304763 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-441731 image ls --format yaml --alsologtostderr:
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-441731 image ls --format yaml --alsologtostderr:
I1018 08:50:21.663603 1304633 out.go:360] Setting OutFile to fd 1 ...
I1018 08:50:21.663718 1304633 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:21.663729 1304633 out.go:374] Setting ErrFile to fd 2...
I1018 08:50:21.663734 1304633 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:21.664020 1304633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
I1018 08:50:21.664618 1304633 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:21.664735 1304633 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:21.665166 1304633 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
I1018 08:50:21.682289 1304633 ssh_runner.go:195] Run: systemctl --version
I1018 08:50:21.682347 1304633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
I1018 08:50:21.701244 1304633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
I1018 08:50:21.810055 1304633 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
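
The four ImageList variants above render the same inventory as short, table, json, and yaml; each run's stderr shows the data ultimately comes from `sudo crictl images --output json` on the node. A short sketch of picking a format and filtering the JSON (the jq filter relies on the repoTags field visible in the stdout above):

  minikube -p "$PROFILE" image ls --format short    # one image ref per line
  minikube -p "$PROFILE" image ls --format table    # boxed IMAGE/TAG/IMAGE ID/SIZE table
  minikube -p "$PROFILE" image ls --format json \
    | jq -r '.[].repoTags[]?'                       # tags only; untagged entries are skipped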

TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-441731 ssh pgrep buildkitd: exit status 1 (341.688314ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image build -t localhost/my-image:functional-441731 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-441731 image build -t localhost/my-image:functional-441731 testdata/build --alsologtostderr: (3.313316106s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-441731 image build -t localhost/my-image:functional-441731 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 842f70c356e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-441731
--> de44ec30fc9
Successfully tagged localhost/my-image:functional-441731
de44ec30fc940c564e4910e786ed8f78783d2cf42c27271fb6bf0a38595b007f
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-441731 image build -t localhost/my-image:functional-441731 testdata/build --alsologtostderr:
I1018 08:50:22.271039 1304782 out.go:360] Setting OutFile to fd 1 ...
I1018 08:50:22.272292 1304782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:22.272307 1304782 out.go:374] Setting ErrFile to fd 2...
I1018 08:50:22.272313 1304782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 08:50:22.272576 1304782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
I1018 08:50:22.273184 1304782 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:22.273858 1304782 config.go:182] Loaded profile config "functional-441731": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 08:50:22.274319 1304782 cli_runner.go:164] Run: docker container inspect functional-441731 --format={{.State.Status}}
I1018 08:50:22.291779 1304782 ssh_runner.go:195] Run: systemctl --version
I1018 08:50:22.291868 1304782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-441731
I1018 08:50:22.316548 1304782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34601 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/functional-441731/id_rsa Username:docker}
I1018 08:50:22.431103 1304782 build_images.go:161] Building image from path: /tmp/build.1533841433.tar
I1018 08:50:22.431247 1304782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 08:50:22.442236 1304782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1533841433.tar
I1018 08:50:22.447069 1304782 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1533841433.tar: stat -c "%s %y" /var/lib/minikube/build/build.1533841433.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1533841433.tar': No such file or directory
I1018 08:50:22.447106 1304782 ssh_runner.go:362] scp /tmp/build.1533841433.tar --> /var/lib/minikube/build/build.1533841433.tar (3072 bytes)
I1018 08:50:22.471464 1304782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1533841433
I1018 08:50:22.480761 1304782 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1533841433 -xf /var/lib/minikube/build/build.1533841433.tar
I1018 08:50:22.489579 1304782 crio.go:315] Building image: /var/lib/minikube/build/build.1533841433
I1018 08:50:22.489647 1304782 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-441731 /var/lib/minikube/build/build.1533841433 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1018 08:50:25.476876 1304782 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-441731 /var/lib/minikube/build/build.1533841433 --cgroup-manager=cgroupfs: (2.987200912s)
I1018 08:50:25.476950 1304782 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1533841433
I1018 08:50:25.485934 1304782 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1533841433.tar
I1018 08:50:25.494117 1304782 build_images.go:217] Built localhost/my-image:functional-441731 from /tmp/build.1533841433.tar
I1018 08:50:25.494145 1304782 build_images.go:133] succeeded building to: functional-441731
I1018 08:50:25.494150 1304782 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.89s)
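
Because `pgrep buildkitd` exits non-zero on this crio node, minikube falls back to `sudo podman build` over SSH, as the stderr above shows. A minimal reproduction of the three-step build context; the STEP lines fix the Dockerfile, but content.txt's contents are not shown in the log, so the echo below is a placeholder:

  mkdir -p build && echo placeholder > build/content.txt
  cat > build/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
  minikube -p "$PROFILE" image build -t localhost/my-image:"$PROFILE" ./build
  minikube -p "$PROFILE" image ls | grep my-image   # confirm the tag landed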

TestFunctional/parallel/ImageCommands/Setup (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-441731
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image rm kicbase/echo-server:functional-441731 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 image ls
2025/10/18 08:50:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)
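
Setup and ImageRemove above form a small round trip: pull and tag an image on the host, remove the profile-tagged copy from the cluster runtime, then re-list to confirm. The same steps by hand, with $PROFILE as a placeholder:

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:"$PROFILE"
  minikube -p "$PROFILE" image rm kicbase/echo-server:"$PROFILE"
  minikube -p "$PROFILE" image ls | grep echo-server || echo "image removed"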

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-441731 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
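
All three UpdateContextCmd cases above run the same command, which rewrites the profile's kubeconfig entry to match the cluster's current API endpoint. A two-line sketch:

  minikube -p "$PROFILE" update-context
  kubectl config current-context   # inspect which context is now active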

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-441731
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-441731
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-441731
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (204.76s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 08:52:43.344253 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m23.858096204s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (204.76s)
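
The 3m24s start above brings up a multi-control-plane cluster; the flags below are taken verbatim from the log, while the ha-demo profile name is a placeholder:

  minikube start -p ha-demo --ha --memory 3072 --wait true \
    --driver=docker --container-runtime=crio
  minikube -p ha-demo status   # prints one block per node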

TestMultiControlPlane/serial/DeployApp (6.73s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 kubectl -- rollout status deployment/busybox: (3.779531475s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-2b5d9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-4krxs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-z56ql -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-2b5d9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-4krxs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-z56ql -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-2b5d9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-4krxs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-z56ql -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.73s)
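
DeployApp applies a busybox Deployment, waits for the rollout, then resolves three DNS names from every replica. Equivalent steps with plain kubectl (the test routes through `minikube kubectl --`, which wraps the same binary); the manifest path is the one from the log:

  kubectl --context ha-demo apply -f ./testdata/ha/ha-pod-dns-test.yaml
  kubectl --context ha-demo rollout status deployment/busybox
  for pod in $(kubectl --context ha-demo get pods -o jsonpath='{.items[*].metadata.name}'); do
    kubectl --context ha-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done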

TestMultiControlPlane/serial/PingHostFromPods (1.77s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-2b5d9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-2b5d9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-4krxs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-4krxs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-z56ql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 kubectl -- exec busybox-7b57f96db7-z56ql -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.77s)
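
Each pod resolves host.minikube.internal (the awk/cut pipeline plucks the address out of nslookup's fifth output line) and then pings it; 192.168.49.1 is the docker network gateway in this run. For a single pod named in $POD:

  HOST_IP=$(kubectl --context ha-demo exec "$POD" -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl --context ha-demo exec "$POD" -- sh -c "ping -c 1 $HOST_IP"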

TestMultiControlPlane/serial/AddWorkerNode (60.86s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 node add --alsologtostderr -v 5
E1018 08:54:06.417812 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:38.614854 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:38.621370 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:38.633139 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:38.654543 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:38.696010 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:38.777451 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:38.938922 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:39.260741 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:39.902636 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:41.184012 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:43.746139 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:48.868249 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:54:59.110427 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 node add --alsologtostderr -v 5: (59.75807502s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5: (1.100147634s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.86s)
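
Joining a worker is a single command; the minute spent above is dominated by provisioning the new node container and waiting for it to become Ready:

  minikube -p ha-demo node add   # add a worker node to the running cluster
  minikube -p ha-demo status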

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-455843 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)
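
NodeLabels dumps every node's label map through a jsonpath template, the same one the test uses (ha-demo again stands in for the profile context):

  kubectl --context ha-demo get nodes \
    -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"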

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.048814089s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)
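
The happiness check parses `profile list --output json`. A hedged filter; the top-level valid array and its Name field are assumptions about the JSON shape rather than facts from this log:

  minikube profile list --output json | jq -r '.valid[].Name'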

TestMultiControlPlane/serial/CopyFile (19.74s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 status --output json --alsologtostderr -v 5: (1.034754996s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp testdata/cp-test.txt ha-455843:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3613690069/001/cp-test_ha-455843.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843:/home/docker/cp-test.txt ha-455843-m02:/home/docker/cp-test_ha-455843_ha-455843-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m02 "sudo cat /home/docker/cp-test_ha-455843_ha-455843-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843:/home/docker/cp-test.txt ha-455843-m03:/home/docker/cp-test_ha-455843_ha-455843-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m03 "sudo cat /home/docker/cp-test_ha-455843_ha-455843-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843:/home/docker/cp-test.txt ha-455843-m04:/home/docker/cp-test_ha-455843_ha-455843-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m04 "sudo cat /home/docker/cp-test_ha-455843_ha-455843-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp testdata/cp-test.txt ha-455843-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3613690069/001/cp-test_ha-455843-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m02:/home/docker/cp-test.txt ha-455843:/home/docker/cp-test_ha-455843-m02_ha-455843.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843 "sudo cat /home/docker/cp-test_ha-455843-m02_ha-455843.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m02:/home/docker/cp-test.txt ha-455843-m03:/home/docker/cp-test_ha-455843-m02_ha-455843-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m03 "sudo cat /home/docker/cp-test_ha-455843-m02_ha-455843-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m02:/home/docker/cp-test.txt ha-455843-m04:/home/docker/cp-test_ha-455843-m02_ha-455843-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m04 "sudo cat /home/docker/cp-test_ha-455843-m02_ha-455843-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp testdata/cp-test.txt ha-455843-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3613690069/001/cp-test_ha-455843-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m03:/home/docker/cp-test.txt ha-455843:/home/docker/cp-test_ha-455843-m03_ha-455843.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843 "sudo cat /home/docker/cp-test_ha-455843-m03_ha-455843.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m03:/home/docker/cp-test.txt ha-455843-m02:/home/docker/cp-test_ha-455843-m03_ha-455843-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m02 "sudo cat /home/docker/cp-test_ha-455843-m03_ha-455843-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m03:/home/docker/cp-test.txt ha-455843-m04:/home/docker/cp-test_ha-455843-m03_ha-455843-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m04 "sudo cat /home/docker/cp-test_ha-455843-m03_ha-455843-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp testdata/cp-test.txt ha-455843-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m04 "sudo cat /home/docker/cp-test.txt"
E1018 08:55:19.591982 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3613690069/001/cp-test_ha-455843-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m04:/home/docker/cp-test.txt ha-455843:/home/docker/cp-test_ha-455843-m04_ha-455843.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843 "sudo cat /home/docker/cp-test_ha-455843-m04_ha-455843.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m04:/home/docker/cp-test.txt ha-455843-m02:/home/docker/cp-test_ha-455843-m04_ha-455843-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m02 "sudo cat /home/docker/cp-test_ha-455843-m04_ha-455843-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 cp ha-455843-m04:/home/docker/cp-test.txt ha-455843-m03:/home/docker/cp-test_ha-455843-m04_ha-455843-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 ssh -n ha-455843-m03 "sudo cat /home/docker/cp-test_ha-455843-m04_ha-455843-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.74s)
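
CopyFile cycles `minikube cp` through every direction (host to node, node to host, node to node) and verifies each copy with `ssh -n <node> "sudo cat ..."`. A condensed sketch of the three directions:

  minikube -p ha-demo cp testdata/cp-test.txt ha-demo:/home/docker/cp-test.txt   # host -> node
  minikube -p ha-demo ssh -n ha-demo "sudo cat /home/docker/cp-test.txt"         # verify
  minikube -p ha-demo cp ha-demo:/home/docker/cp-test.txt /tmp/cp-test_copy.txt  # node -> host
  minikube -p ha-demo cp ha-demo:/home/docker/cp-test.txt \
    ha-demo-m02:/home/docker/cp-test_copy.txt                                    # node -> node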

TestMultiControlPlane/serial/StopSecondaryNode (12.83s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 node stop m02 --alsologtostderr -v 5: (12.05777237s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5: exit status 7 (771.83179ms)
-- stdout --
	ha-455843
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-455843-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-455843-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-455843-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1018 08:55:35.725279 1319734 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:55:35.725427 1319734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:55:35.725438 1319734 out.go:374] Setting ErrFile to fd 2...
	I1018 08:55:35.725443 1319734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:55:35.725816 1319734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:55:35.726418 1319734 out.go:368] Setting JSON to false
	I1018 08:55:35.726461 1319734 mustload.go:65] Loading cluster: ha-455843
	I1018 08:55:35.727334 1319734 config.go:182] Loaded profile config "ha-455843": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:55:35.727356 1319734 status.go:174] checking status of ha-455843 ...
	I1018 08:55:35.729158 1319734 cli_runner.go:164] Run: docker container inspect ha-455843 --format={{.State.Status}}
	I1018 08:55:35.729580 1319734 notify.go:220] Checking for updates...
	I1018 08:55:35.753630 1319734 status.go:371] ha-455843 host status = "Running" (err=<nil>)
	I1018 08:55:35.753654 1319734 host.go:66] Checking if "ha-455843" exists ...
	I1018 08:55:35.753960 1319734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-455843
	I1018 08:55:35.781875 1319734 host.go:66] Checking if "ha-455843" exists ...
	I1018 08:55:35.782205 1319734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:55:35.782246 1319734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-455843
	I1018 08:55:35.801873 1319734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34606 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/ha-455843/id_rsa Username:docker}
	I1018 08:55:35.905074 1319734 ssh_runner.go:195] Run: systemctl --version
	I1018 08:55:35.911431 1319734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:55:35.924801 1319734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:55:36.003491 1319734 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-18 08:55:35.993586876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 08:55:36.004570 1319734 kubeconfig.go:125] found "ha-455843" server: "https://192.168.49.254:8443"
	I1018 08:55:36.004617 1319734 api_server.go:166] Checking apiserver status ...
	I1018 08:55:36.004689 1319734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:55:36.018942 1319734 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1260/cgroup
	I1018 08:55:36.027952 1319734 api_server.go:182] apiserver freezer: "9:freezer:/docker/3f14183d1a6a24fa7220b6473a8ca34bcc01f7e6e5b5225549070dbc2d270ccb/crio/crio-9ca01491b4f7817a415310e2999224b0cc991893dea5b467c01800f89008313e"
	I1018 08:55:36.028033 1319734 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3f14183d1a6a24fa7220b6473a8ca34bcc01f7e6e5b5225549070dbc2d270ccb/crio/crio-9ca01491b4f7817a415310e2999224b0cc991893dea5b467c01800f89008313e/freezer.state
	I1018 08:55:36.036707 1319734 api_server.go:204] freezer state: "THAWED"
	I1018 08:55:36.036777 1319734 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 08:55:36.045317 1319734 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 08:55:36.045348 1319734 status.go:463] ha-455843 apiserver status = Running (err=<nil>)
	I1018 08:55:36.045360 1319734 status.go:176] ha-455843 status: &{Name:ha-455843 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:55:36.045408 1319734 status.go:174] checking status of ha-455843-m02 ...
	I1018 08:55:36.045733 1319734 cli_runner.go:164] Run: docker container inspect ha-455843-m02 --format={{.State.Status}}
	I1018 08:55:36.062589 1319734 status.go:371] ha-455843-m02 host status = "Stopped" (err=<nil>)
	I1018 08:55:36.062616 1319734 status.go:384] host is not running, skipping remaining checks
	I1018 08:55:36.062622 1319734 status.go:176] ha-455843-m02 status: &{Name:ha-455843-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:55:36.062649 1319734 status.go:174] checking status of ha-455843-m03 ...
	I1018 08:55:36.062975 1319734 cli_runner.go:164] Run: docker container inspect ha-455843-m03 --format={{.State.Status}}
	I1018 08:55:36.080703 1319734 status.go:371] ha-455843-m03 host status = "Running" (err=<nil>)
	I1018 08:55:36.080726 1319734 host.go:66] Checking if "ha-455843-m03" exists ...
	I1018 08:55:36.081027 1319734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-455843-m03
	I1018 08:55:36.098311 1319734 host.go:66] Checking if "ha-455843-m03" exists ...
	I1018 08:55:36.098761 1319734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:55:36.098824 1319734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-455843-m03
	I1018 08:55:36.117770 1319734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34616 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/ha-455843-m03/id_rsa Username:docker}
	I1018 08:55:36.217392 1319734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:55:36.233689 1319734 kubeconfig.go:125] found "ha-455843" server: "https://192.168.49.254:8443"
	I1018 08:55:36.233725 1319734 api_server.go:166] Checking apiserver status ...
	I1018 08:55:36.233784 1319734 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:55:36.246819 1319734 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	I1018 08:55:36.255623 1319734 api_server.go:182] apiserver freezer: "9:freezer:/docker/bbc303109365d0fa279dc8eadb9191f173e1d13d74b112dab0bbc010e17b45ae/crio/crio-e63d9d2d7e6475b8021a72cc0927ae401bb16c15ed4414b6a96a1f397cf2aaca"
	I1018 08:55:36.255704 1319734 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bbc303109365d0fa279dc8eadb9191f173e1d13d74b112dab0bbc010e17b45ae/crio/crio-e63d9d2d7e6475b8021a72cc0927ae401bb16c15ed4414b6a96a1f397cf2aaca/freezer.state
	I1018 08:55:36.264713 1319734 api_server.go:204] freezer state: "THAWED"
	I1018 08:55:36.264741 1319734 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 08:55:36.272861 1319734 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 08:55:36.272890 1319734 status.go:463] ha-455843-m03 apiserver status = Running (err=<nil>)
	I1018 08:55:36.272900 1319734 status.go:176] ha-455843-m03 status: &{Name:ha-455843-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:55:36.272918 1319734 status.go:174] checking status of ha-455843-m04 ...
	I1018 08:55:36.273224 1319734 cli_runner.go:164] Run: docker container inspect ha-455843-m04 --format={{.State.Status}}
	I1018 08:55:36.291754 1319734 status.go:371] ha-455843-m04 host status = "Running" (err=<nil>)
	I1018 08:55:36.291781 1319734 host.go:66] Checking if "ha-455843-m04" exists ...
	I1018 08:55:36.292148 1319734 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-455843-m04
	I1018 08:55:36.310419 1319734 host.go:66] Checking if "ha-455843-m04" exists ...
	I1018 08:55:36.310727 1319734 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:55:36.310785 1319734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-455843-m04
	I1018 08:55:36.332256 1319734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/ha-455843-m04/id_rsa Username:docker}
	I1018 08:55:36.432853 1319734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:55:36.446095 1319734 status.go:176] ha-455843-m04 status: &{Name:ha-455843-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.83s)
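
The apiserver probe in the stderr above is worth unpacking: status locates the kube-apiserver process with pgrep, maps its PID to a freezer cgroup via /proc/<pid>/cgroup, confirms the cgroup is THAWED (i.e. the container is not paused), and only then hits /healthz. A rough shell equivalent, assuming the cgroup v1 freezer layout shown in the log:

	pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	cg=$(sudo grep -E '^[0-9]+:freezer:' /proc/$pid/cgroup | cut -d: -f3)
	sudo cat /sys/fs/cgroup/freezer$cg/freezer.state    # expect THAWED
	curl -sk https://192.168.49.254:8443/healthz        # expect ok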

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.79s)

TestMultiControlPlane/serial/RestartSecondaryNode (29.37s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 node start m02 --alsologtostderr -v 5
E1018 08:56:00.553515 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 node start m02 --alsologtostderr -v 5: (27.898248274s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5: (1.332330199s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.37s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.37s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.37089811s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.37s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.27s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 stop --alsologtostderr -v 5: (27.666748353s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 start --wait true --alsologtostderr -v 5
E1018 08:57:22.476671 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 08:57:43.344231 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 start --wait true --alsologtostderr -v 5: (1m39.447612693s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.27s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.7s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 node delete m03 --alsologtostderr -v 5: (10.731327215s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.70s)
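
The go-template query above walks each node's conditions and prints only the Ready status, so a healthy cluster prints one "True" line per remaining node. The same check run directly, with the quoting relaxed for an interactive shell:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'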

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (36.2s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 stop --alsologtostderr -v 5: (36.088829662s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5: exit status 7 (107.640966ms)
-- stdout --
	ha-455843
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-455843-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-455843-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I1018 08:59:03.862825 1331603 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:59:03.862951 1331603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:59:03.862962 1331603 out.go:374] Setting ErrFile to fd 2...
	I1018 08:59:03.862967 1331603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:59:03.863241 1331603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 08:59:03.863435 1331603 out.go:368] Setting JSON to false
	I1018 08:59:03.863481 1331603 mustload.go:65] Loading cluster: ha-455843
	I1018 08:59:03.863551 1331603 notify.go:220] Checking for updates...
	I1018 08:59:03.864775 1331603 config.go:182] Loaded profile config "ha-455843": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:59:03.864804 1331603 status.go:174] checking status of ha-455843 ...
	I1018 08:59:03.865470 1331603 cli_runner.go:164] Run: docker container inspect ha-455843 --format={{.State.Status}}
	I1018 08:59:03.882437 1331603 status.go:371] ha-455843 host status = "Stopped" (err=<nil>)
	I1018 08:59:03.882459 1331603 status.go:384] host is not running, skipping remaining checks
	I1018 08:59:03.882465 1331603 status.go:176] ha-455843 status: &{Name:ha-455843 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:59:03.882495 1331603 status.go:174] checking status of ha-455843-m02 ...
	I1018 08:59:03.882806 1331603 cli_runner.go:164] Run: docker container inspect ha-455843-m02 --format={{.State.Status}}
	I1018 08:59:03.901390 1331603 status.go:371] ha-455843-m02 host status = "Stopped" (err=<nil>)
	I1018 08:59:03.901414 1331603 status.go:384] host is not running, skipping remaining checks
	I1018 08:59:03.901426 1331603 status.go:176] ha-455843-m02 status: &{Name:ha-455843-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 08:59:03.901444 1331603 status.go:174] checking status of ha-455843-m04 ...
	I1018 08:59:03.901722 1331603 cli_runner.go:164] Run: docker container inspect ha-455843-m04 --format={{.State.Status}}
	I1018 08:59:03.923199 1331603 status.go:371] ha-455843-m04 host status = "Stopped" (err=<nil>)
	I1018 08:59:03.923222 1331603 status.go:384] host is not running, skipping remaining checks
	I1018 08:59:03.923229 1331603 status.go:176] ha-455843-m04 status: &{Name:ha-455843-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.20s)
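
Note that the non-zero exit here is by design: with every node stopped, status reports the cluster state through its exit code (7 in this run) rather than failing outright, and the test asserts on that code. Reproducing the check by hand against the same profile:

	out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5
	echo $?    # 7 while the cluster is stopped, per the run above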

TestMultiControlPlane/serial/RestartCluster (81.83s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 08:59:38.614706 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:00:06.318826 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m20.821984732s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.83s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (83.3s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 node add --control-plane --alsologtostderr -v 5: (1m22.258450001s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-455843 status --alsologtostderr -v 5: (1.040773128s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (83.30s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.100774518s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)

TestJSONOutput/start/Command (80.95s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-274041 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1018 09:02:43.344602 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-274041 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.939223763s)
--- PASS: TestJSONOutput/start/Command (80.95s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-274041 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-274041 --output=json --user=testUser: (5.857514761s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-217977 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-217977 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.665405ms)
-- stdout --
	{"specversion":"1.0","id":"4ec5a785-5da2-402a-9bf1-36b3b9b5a1ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-217977] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"53339102-97fd-4e75-b9ba-19206ea84de4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"c9dafce6-4db4-4e4d-b7c4-4a5813d7966c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dea5ad9d-0b88-4633-aee5-7fad6dac50e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig"}}
	{"specversion":"1.0","id":"a5187ecb-dc91-4486-8198-656c11ef543b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube"}}
	{"specversion":"1.0","id":"3dd6cd06-6ef4-4eab-964a-95fc00d1d93e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1f6ac15c-940b-4fbf-8310-8c99848a6f56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bf333cab-f843-47b1-b664-e1e390918695","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-217977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-217977
--- PASS: TestErrorJSONOutput (0.24s)
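
Every line emitted under --output=json above is a CloudEvents envelope (specversion 1.0) whose data field carries the user-facing payload, so the stream is straightforward to post-process. A minimal sketch, assuming jq is installed and using an illustrative profile name:

	out/minikube-linux-arm64 start -p demo --output=json | jq -r 'select(.type | endswith(".error")) | .data.message'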

TestKicCustomNetwork/create_custom_network (37.91s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-064376 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-064376 --network=: (35.66544896s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-064376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-064376
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-064376: (2.222190947s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.91s)

TestKicCustomNetwork/use_default_bridge_network (37.37s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-940566 --network=bridge
E1018 09:04:38.616623 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-940566 --network=bridge: (35.278950572s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-940566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-940566
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-940566: (2.067804987s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.37s)

TestKicExistingNetwork (35.43s)

=== RUN   TestKicExistingNetwork
I1018 09:04:50.115992 1276097 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 09:04:50.131338 1276097 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 09:04:50.131410 1276097 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 09:04:50.131430 1276097 cli_runner.go:164] Run: docker network inspect existing-network
W1018 09:04:50.146529 1276097 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 09:04:50.146559 1276097 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1018 09:04:50.146574 1276097 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1018 09:04:50.146681 1276097 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 09:04:50.163969 1276097 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-521f8f572997 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:7e:e5:c0:67:29} reservation:<nil>}
I1018 09:04:50.169533 1276097 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1018 09:04:50.169949 1276097 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40004958d0}
I1018 09:04:50.170421 1276097 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1018 09:04:50.170505 1276097 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 09:04:50.229410 1276097 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-436707 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-436707 --network=existing-network: (33.213502472s)
helpers_test.go:175: Cleaning up "existing-network-436707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-436707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-436707: (2.07158715s)
I1018 09:05:25.530519 1276097 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.43s)
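
The setup in the I-lines above is the interesting part of this test: it inspects the existing docker networks, skips 192.168.49.0/24 (taken) and 192.168.58.0/24 (reserved), and pre-creates "existing-network" on the first free /24 with the same bridge options and labels minikube itself applies:

	docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network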

TestKicCustomSubnet (37.13s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-603338 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-603338 --subnet=192.168.60.0/24: (34.844773436s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-603338 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-603338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-603338
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-603338: (2.258537426s)
--- PASS: TestKicCustomSubnet (37.13s)

TestKicStaticIP (34.3s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-912188 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-912188 --static-ip=192.168.200.200: (31.934637313s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-912188 ip
helpers_test.go:175: Cleaning up "static-ip-912188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-912188
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-912188: (2.217139827s)
--- PASS: TestKicStaticIP (34.30s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (74.66s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-834309 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-834309 --driver=docker  --container-runtime=crio: (31.816238254s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-837280 --driver=docker  --container-runtime=crio
E1018 09:07:43.344816 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-837280 --driver=docker  --container-runtime=crio: (37.086042066s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-834309
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-837280
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-837280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-837280
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-837280: (2.153492758s)
helpers_test.go:175: Cleaning up "first-834309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-834309
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-834309: (2.056823614s)
--- PASS: TestMinikubeProfile (74.66s)
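
Profile switching above takes two commands: "profile <name>" sets the active profile and "profile list -ojson" reports all of them machine-readably. A quick way to pull just the names, assuming jq and the valid/invalid grouping the JSON output uses:

	out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'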

TestMountStart/serial/StartWithMountFirst (9.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-006322 --memory=3072 --mount-string /tmp/TestMountStartserial448468478/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-006322 --memory=3072 --mount-string /tmp/TestMountStartserial448468478/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.05040959s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.05s)
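
The flags above export the host directory /tmp/TestMountStartserial448468478/001 into the node at /minikube-host over 9p (uid/gid 0, msize 6543, port 46464). One way to eyeball the mount from inside the node, assuming the filesystem shows up with type 9p:

	out/minikube-linux-arm64 -p mount-start-1-006322 ssh -- mount | grep 9p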

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-006322 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.34s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-008455 --memory=3072 --mount-string /tmp/TestMountStartserial448468478/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-008455 --memory=3072 --mount-string /tmp/TestMountStartserial448468478/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.341645878s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.34s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-008455 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-006322 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-006322 --alsologtostderr -v=5: (1.693737279s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-008455 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-008455
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-008455: (1.28986622s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.66s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-008455
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-008455: (6.656618776s)
--- PASS: TestMountStart/serial/RestartStopped (7.66s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-008455 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (140.85s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-799900 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 09:09:38.615233 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-799900 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m20.329105504s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.85s)

TestMultiNode/serial/DeployApp2Nodes (5.22s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- rollout status deployment/busybox
E1018 09:10:46.420175 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-799900 -- rollout status deployment/busybox: (3.353492307s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-6sdnt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-qc976 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-6sdnt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-qc976 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-6sdnt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-qc976 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.22s)
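
The lookups above resolve three names (kubernetes.io, kubernetes.default, and the fully qualified service name) from each busybox pod, proving cluster DNS works on both nodes. The same spot check by hand, using a pod name taken from the jsonpath query above:

	kubectl --context multinode-799900 get pods -o jsonpath='{.items[*].metadata.name}'
	kubectl --context multinode-799900 exec busybox-7b57f96db7-6sdnt -- nslookup kubernetes.default.svc.cluster.local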

TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-6sdnt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-6sdnt -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-qc976 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-799900 -- exec busybox-7b57f96db7-qc976 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
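
The pipeline above digs the host's gateway address out of a pod: busybox's nslookup prints the answer on line 5, awk 'NR==5' keeps that line, and cut takes the third space-separated field, which the test then pings. It is layout-sensitive, so it assumes busybox's nslookup output format:

	kubectl --context multinode-799900 exec busybox-7b57f96db7-6sdnt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"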

TestMultiNode/serial/AddNode (59.55s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-799900 -v=5 --alsologtostderr
E1018 09:11:01.680884 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-799900 -v=5 --alsologtostderr: (58.813586639s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.55s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-799900 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.81s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp testdata/cp-test.txt multinode-799900:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp multinode-799900:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile270648906/001/cp-test_multinode-799900.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp multinode-799900:/home/docker/cp-test.txt multinode-799900-m02:/home/docker/cp-test_multinode-799900_multinode-799900-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m02 "sudo cat /home/docker/cp-test_multinode-799900_multinode-799900-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp multinode-799900:/home/docker/cp-test.txt multinode-799900-m03:/home/docker/cp-test_multinode-799900_multinode-799900-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m03 "sudo cat /home/docker/cp-test_multinode-799900_multinode-799900-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp testdata/cp-test.txt multinode-799900-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp multinode-799900-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile270648906/001/cp-test_multinode-799900-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp multinode-799900-m02:/home/docker/cp-test.txt multinode-799900:/home/docker/cp-test_multinode-799900-m02_multinode-799900.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900 "sudo cat /home/docker/cp-test_multinode-799900-m02_multinode-799900.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp multinode-799900-m02:/home/docker/cp-test.txt multinode-799900-m03:/home/docker/cp-test_multinode-799900-m02_multinode-799900-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m03 "sudo cat /home/docker/cp-test_multinode-799900-m02_multinode-799900-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp testdata/cp-test.txt multinode-799900-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp multinode-799900-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile270648906/001/cp-test_multinode-799900-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp multinode-799900-m03:/home/docker/cp-test.txt multinode-799900:/home/docker/cp-test_multinode-799900-m03_multinode-799900.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900 "sudo cat /home/docker/cp-test_multinode-799900-m03_multinode-799900.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 cp multinode-799900-m03:/home/docker/cp-test.txt multinode-799900-m02:/home/docker/cp-test_multinode-799900-m03_multinode-799900-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 ssh -n multinode-799900-m02 "sudo cat /home/docker/cp-test_multinode-799900-m03_multinode-799900-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.81s)

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-799900 node stop m03: (1.32455744s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-799900 status: exit status 7 (555.271148ms)
-- stdout --
	multinode-799900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-799900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-799900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-799900 status --alsologtostderr: exit status 7 (567.109606ms)
-- stdout --
	multinode-799900
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-799900-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-799900-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1018 09:12:03.728253 1381934 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:12:03.728420 1381934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:12:03.728430 1381934 out.go:374] Setting ErrFile to fd 2...
	I1018 09:12:03.728434 1381934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:12:03.728710 1381934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:12:03.728930 1381934 out.go:368] Setting JSON to false
	I1018 09:12:03.728984 1381934 mustload.go:65] Loading cluster: multinode-799900
	I1018 09:12:03.729043 1381934 notify.go:220] Checking for updates...
	I1018 09:12:03.729416 1381934 config.go:182] Loaded profile config "multinode-799900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:12:03.729431 1381934 status.go:174] checking status of multinode-799900 ...
	I1018 09:12:03.729926 1381934 cli_runner.go:164] Run: docker container inspect multinode-799900 --format={{.State.Status}}
	I1018 09:12:03.752794 1381934 status.go:371] multinode-799900 host status = "Running" (err=<nil>)
	I1018 09:12:03.752820 1381934 host.go:66] Checking if "multinode-799900" exists ...
	I1018 09:12:03.753104 1381934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-799900
	I1018 09:12:03.791914 1381934 host.go:66] Checking if "multinode-799900" exists ...
	I1018 09:12:03.792277 1381934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:12:03.792352 1381934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-799900
	I1018 09:12:03.813086 1381934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34726 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/multinode-799900/id_rsa Username:docker}
	I1018 09:12:03.917024 1381934 ssh_runner.go:195] Run: systemctl --version
	I1018 09:12:03.923495 1381934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:12:03.936763 1381934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:12:04.001213 1381934 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-18 09:12:03.992163101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:12:04.001901 1381934 kubeconfig.go:125] found "multinode-799900" server: "https://192.168.58.2:8443"
	I1018 09:12:04.001956 1381934 api_server.go:166] Checking apiserver status ...
	I1018 09:12:04.002007 1381934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:12:04.017328 1381934 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1251/cgroup
	I1018 09:12:04.026583 1381934 api_server.go:182] apiserver freezer: "9:freezer:/docker/4e4fb67f06c7bab8ba521544027e5e8f2c1e20c6a14af8530bc68c188313ee5f/crio/crio-8c88e141d2398a6b65d778013ad88dd6031cdd1677e13d26390147939d93346e"
	I1018 09:12:04.026693 1381934 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4e4fb67f06c7bab8ba521544027e5e8f2c1e20c6a14af8530bc68c188313ee5f/crio/crio-8c88e141d2398a6b65d778013ad88dd6031cdd1677e13d26390147939d93346e/freezer.state
	I1018 09:12:04.035378 1381934 api_server.go:204] freezer state: "THAWED"
	I1018 09:12:04.035412 1381934 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1018 09:12:04.044825 1381934 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1018 09:12:04.044879 1381934 status.go:463] multinode-799900 apiserver status = Running (err=<nil>)
	I1018 09:12:04.044894 1381934 status.go:176] multinode-799900 status: &{Name:multinode-799900 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:12:04.044913 1381934 status.go:174] checking status of multinode-799900-m02 ...
	I1018 09:12:04.045306 1381934 cli_runner.go:164] Run: docker container inspect multinode-799900-m02 --format={{.State.Status}}
	I1018 09:12:04.063292 1381934 status.go:371] multinode-799900-m02 host status = "Running" (err=<nil>)
	I1018 09:12:04.063318 1381934 host.go:66] Checking if "multinode-799900-m02" exists ...
	I1018 09:12:04.063628 1381934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-799900-m02
	I1018 09:12:04.081869 1381934 host.go:66] Checking if "multinode-799900-m02" exists ...
	I1018 09:12:04.082232 1381934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:12:04.082280 1381934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-799900-m02
	I1018 09:12:04.100780 1381934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34731 SSHKeyPath:/home/jenkins/minikube-integration/21767-1274243/.minikube/machines/multinode-799900-m02/id_rsa Username:docker}
	I1018 09:12:04.201336 1381934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:12:04.214250 1381934 status.go:176] multinode-799900-m02 status: &{Name:multinode-799900-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:12:04.214284 1381934 status.go:174] checking status of multinode-799900-m03 ...
	I1018 09:12:04.214596 1381934 cli_runner.go:164] Run: docker container inspect multinode-799900-m03 --format={{.State.Status}}
	I1018 09:12:04.232029 1381934 status.go:371] multinode-799900-m03 host status = "Stopped" (err=<nil>)
	I1018 09:12:04.232049 1381934 status.go:384] host is not running, skipping remaining checks
	I1018 09:12:04.232055 1381934 status.go:176] multinode-799900-m03 status: &{Name:multinode-799900-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
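
The --alsologtostderr trace above also documents how minikube status decides the apiserver is running: it finds the kube-apiserver PID with pgrep, reads the freezer entry from /proc/<pid>/cgroup, and confirms the corresponding freezer.state reads THAWED before probing /healthz. A minimal sketch of the cgroup part of that check, assuming cgroup v1 with the freezer controller mounted at /sys/fs/cgroup/freezer as in this log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState mirrors the check in the trace above: find the freezer entry
// in /proc/<pid>/cgroup, then read that cgroup's freezer.state file.
// Assumes cgroup v1 with the freezer controller at /sys/fs/cgroup/freezer.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		// Entries look like "9:freezer:/docker/<id>/crio/crio-<id>".
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil // "THAWED" or "FROZEN"
		}
	}
	return "", fmt.Errorf("no freezer cgroup entry for pid %d", pid)
}

func main() {
	fmt.Println(freezerState(os.Getpid()))
}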

TestMultiNode/serial/StartAfterStop (8.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-799900 node start m03 -v=5 --alsologtostderr: (7.42570572s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.22s)

TestMultiNode/serial/RestartKeepsNodes (77.47s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-799900
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-799900
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-799900: (25.042328817s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-799900 --wait=true -v=5 --alsologtostderr
E1018 09:12:43.344193 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-799900 --wait=true -v=5 --alsologtostderr: (52.304032536s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-799900
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.47s)

TestMultiNode/serial/DeleteNode (5.66s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-799900 node delete m03: (4.965060237s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.66s)
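
The go-template passed to kubectl above prints each node's Ready condition status, one per line, so the test can assert every remaining node reports True. The same template can be evaluated locally with Go's text/template against a hypothetical node list (kubectl's template engine registers a few extra helper functions, but this template uses none of them):

package main

import (
	"os"
	"text/template"
)

// The template kubectl evaluates above, run against a hypothetical node
// list: it prints the status of each node's "Ready" condition per line.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
				{"type": "MemoryPressure", "status": "False"},
			}}},
		},
	}
	if err := template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}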

TestMultiNode/serial/StopMultiNode (23.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-799900 stop: (23.799655672s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-799900 status: exit status 7 (94.634667ms)
-- stdout --
	multinode-799900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-799900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-799900 status --alsologtostderr: exit status 7 (91.032795ms)
-- stdout --
	multinode-799900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-799900-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1018 09:13:59.530747 1389707 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:13:59.530918 1389707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:13:59.530931 1389707 out.go:374] Setting ErrFile to fd 2...
	I1018 09:13:59.530938 1389707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:13:59.531206 1389707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:13:59.531422 1389707 out.go:368] Setting JSON to false
	I1018 09:13:59.531474 1389707 mustload.go:65] Loading cluster: multinode-799900
	I1018 09:13:59.531537 1389707 notify.go:220] Checking for updates...
	I1018 09:13:59.532878 1389707 config.go:182] Loaded profile config "multinode-799900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:13:59.532908 1389707 status.go:174] checking status of multinode-799900 ...
	I1018 09:13:59.533579 1389707 cli_runner.go:164] Run: docker container inspect multinode-799900 --format={{.State.Status}}
	I1018 09:13:59.550811 1389707 status.go:371] multinode-799900 host status = "Stopped" (err=<nil>)
	I1018 09:13:59.550834 1389707 status.go:384] host is not running, skipping remaining checks
	I1018 09:13:59.550841 1389707 status.go:176] multinode-799900 status: &{Name:multinode-799900 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:13:59.550871 1389707 status.go:174] checking status of multinode-799900-m02 ...
	I1018 09:13:59.551177 1389707 cli_runner.go:164] Run: docker container inspect multinode-799900-m02 --format={{.State.Status}}
	I1018 09:13:59.571721 1389707 status.go:371] multinode-799900-m02 host status = "Stopped" (err=<nil>)
	I1018 09:13:59.571746 1389707 status.go:384] host is not running, skipping remaining checks
	I1018 09:13:59.571753 1389707 status.go:176] multinode-799900-m02 status: &{Name:multinode-799900-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

TestMultiNode/serial/RestartMultiNode (54.92s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-799900 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 09:14:38.614396 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-799900 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (54.181408242s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-799900 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.92s)

TestMultiNode/serial/ValidateNameConflict (37.58s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-799900
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-799900-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-799900-m02 --driver=docker  --container-runtime=crio: exit status 14 (99.184082ms)
-- stdout --
	* [multinode-799900-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-799900-m02' is duplicated with machine name 'multinode-799900-m02' in profile 'multinode-799900'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-799900-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-799900-m03 --driver=docker  --container-runtime=crio: (34.774155448s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-799900
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-799900: exit status 80 (320.168057ms)
-- stdout --
	* Adding node m03 to cluster multinode-799900 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-799900-m03 already exists in multinode-799900-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-799900-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-799900-m03: (2.326613932s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.58s)
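
Both non-zero exits above come from profile-name validation: a new profile may not reuse the name of an existing profile, nor of a machine belonging to one. A minimal sketch of that rule with hypothetical types (minikube's real config structs differ):

package main

import "fmt"

// Hypothetical types: minikube's real config structs differ.
type profile struct {
	Name     string
	Machines []string
}

// validateName enforces the rule both failures above exercise: a new profile
// name must not match an existing profile or any machine inside one.
func validateName(name string, existing []profile) error {
	for _, p := range existing {
		if p.Name == name {
			return fmt.Errorf("profile name %q already exists", name)
		}
		for _, m := range p.Machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{Name: "multinode-799900", Machines: []string{"multinode-799900", "multinode-799900-m02", "multinode-799900-m03"}}}
	fmt.Println(validateName("multinode-799900-m02", existing))
}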

TestPreload (122.21s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-815969 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-815969 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (59.997716375s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-815969 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-815969 image pull gcr.io/k8s-minikube/busybox: (2.282646805s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-815969
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-815969: (5.908479438s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-815969 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-815969 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (51.368624142s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-815969 image list
helpers_test.go:175: Cleaning up "test-preload-815969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-815969
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-815969: (2.412538734s)
--- PASS: TestPreload (122.21s)

TestScheduledStopUnix (106.85s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-823174 --memory=3072 --driver=docker  --container-runtime=crio
E1018 09:17:43.344242 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-823174 --memory=3072 --driver=docker  --container-runtime=crio: (30.776950077s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-823174 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-823174 -n scheduled-stop-823174
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-823174 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 09:18:10.021485 1276097 retry.go:31] will retry after 138.888µs: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.021784 1276097 retry.go:31] will retry after 183.567µs: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.023028 1276097 retry.go:31] will retry after 188.293µs: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.024151 1276097 retry.go:31] will retry after 329.349µs: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.025326 1276097 retry.go:31] will retry after 379.588µs: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.026529 1276097 retry.go:31] will retry after 519.832µs: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.027672 1276097 retry.go:31] will retry after 1.196572ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.029928 1276097 retry.go:31] will retry after 1.400573ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.032146 1276097 retry.go:31] will retry after 3.463791ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.036414 1276097 retry.go:31] will retry after 4.087374ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.040684 1276097 retry.go:31] will retry after 4.85657ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.047274 1276097 retry.go:31] will retry after 5.389614ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.053535 1276097 retry.go:31] will retry after 11.373877ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.065862 1276097 retry.go:31] will retry after 25.598223ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.092118 1276097 retry.go:31] will retry after 31.764671ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
I1018 09:18:10.124353 1276097 retry.go:31] will retry after 60.35398ms: open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/scheduled-stop-823174/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-823174 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-823174 -n scheduled-stop-823174
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-823174
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-823174 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-823174
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-823174: exit status 7 (73.748125ms)
-- stdout --
	scheduled-stop-823174
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-823174 -n scheduled-stop-823174
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-823174 -n scheduled-stop-823174: exit status 7 (67.612187ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-823174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-823174
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-823174: (4.459255923s)
--- PASS: TestScheduledStopUnix (106.85s)
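
The retry.go lines above show the test polling for the scheduled-stop pid file with steadily growing waits. A sketch of that polling pattern; the doubling backoff is an assumption, since the log's actual intervals grow less regularly:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile retries reading the scheduled-stop pid file with a growing
// delay, as the retry.go lines above do. The doubling factor is an
// assumption; the log's actual intervals grow less regularly.
func waitForPidFile(path string, deadline time.Duration) ([]byte, error) {
	delay := 100 * time.Microsecond
	start := time.Now()
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if time.Since(start) > deadline {
			return nil, fmt.Errorf("giving up waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	// Hypothetical path; the test polls its profile directory's pid file.
	_, err := waitForPidFile("/tmp/scheduled-stop-example/pid", 5*time.Millisecond)
	fmt.Println(err)
}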

TestInsufficientStorage (14.92s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-194172 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-194172 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (12.314909892s)
-- stdout --
	{"specversion":"1.0","id":"1f751dac-b31d-4fa7-a88b-84deae3520c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-194172] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6995ad6c-1d46-4635-ac37-92bc1880c571","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21767"}}
	{"specversion":"1.0","id":"35e3cff1-441a-4265-b643-0bf5fce9ce63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1bc651cb-3e65-4a7f-9af9-b51b5909734e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig"}}
	{"specversion":"1.0","id":"3f4f385e-b0c5-430d-82cc-371d350cecd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube"}}
	{"specversion":"1.0","id":"b7466d8c-0817-4cdc-862c-52c5002da106","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"33dfec76-e407-4862-ae7d-d68bd61f09b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d327f7a7-7888-4086-b32a-ea8735b582d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2696581b-0d09-47ed-b5cf-0cad2215ae88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"8528f231-9302-4930-a815-4127596e1022","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"27157f8e-cde4-4000-99f5-2c7740853d2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d1502f61-1440-4847-9452-8032c16da86a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-194172\" primary control-plane node in \"insufficient-storage-194172\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b03d4159-d210-4c48-9c80-35b0040304b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2bf045e4-0fc9-4f72-9702-d1b09007e0ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"776490d6-eac1-4adc-95ec-eed218824670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-194172 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-194172 --output=json --layout=cluster: exit status 7 (301.233291ms)
-- stdout --
	{"Name":"insufficient-storage-194172","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-194172","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1018 09:19:38.172566 1406318 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-194172" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-194172 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-194172 --output=json --layout=cluster: exit status 7 (306.072551ms)
-- stdout --
	{"Name":"insufficient-storage-194172","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-194172","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1018 09:19:38.478044 1406384 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-194172" does not appear in /home/jenkins/minikube-integration/21767-1274243/kubeconfig
	E1018 09:19:38.488027 1406384 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/insufficient-storage-194172/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-194172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-194172
E1018 09:19:38.614683 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-194172: (2.000183607s)
--- PASS: TestInsufficientStorage (14.92s)
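
With --output=json, every progress line minikube emits is a CloudEvents-style JSON record, as visible above, which is what lets the test assert on the RSRC_DOCKER_STORAGE error by field rather than by scraping text. A minimal sketch of consuming such a stream; the struct mirrors the keys shown in the log, and data is assumed to be a flat string-to-string map (true for every event here):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event mirrors the keys visible in the JSON lines above; "data" is assumed
// flat string-to-string, which holds for every event in this log.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// One line copied (abbreviated) from the output above.
	stream := `{"specversion":"1.0","id":"6995ad6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","data":{"message":"MINIKUBE_LOCATION=21767"}}`
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON noise interleaved in the stream
		}
		fmt.Println(e.Type, e.Data["message"])
	}
}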

TestRunningBinaryUpgrade (55.05s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3053589276 start -p running-upgrade-499663 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1018 09:24:38.614890 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3053589276 start -p running-upgrade-499663 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.1091672s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-499663 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-499663 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.145531974s)
helpers_test.go:175: Cleaning up "running-upgrade-499663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-499663
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-499663: (1.972956784s)
--- PASS: TestRunningBinaryUpgrade (55.05s)

TestKubernetesUpgrade (353.84s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.406513469s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-757858
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-757858: (1.316723443s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-757858 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-757858 status --format={{.Host}}: exit status 7 (70.565639ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1018 09:22:43.343990 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.041027384s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-757858 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (147.063739ms)
-- stdout --
	* [kubernetes-upgrade-757858] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-757858
	    minikube start -p kubernetes-upgrade-757858 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7578582 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-757858 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1018 09:27:26.421903 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-757858 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.535770689s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-757858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-757858
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-757858: (2.195675218s)
--- PASS: TestKubernetesUpgrade (353.84s)
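
The exit status 106 above is minikube's downgrade guard: a requested --kubernetes-version older than the version the cluster already runs is refused with K8S_DOWNGRADE_UNSUPPORTED rather than applied. A sketch of the version comparison using golang.org/x/mod/semver; minikube's actual implementation may differ, but both versions in the log carry the v prefix semver.Compare requires:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkDowngrade sketches the guard behind exit status 106: starting an
// existing cluster with an older --kubernetes-version is refused.
// semver.Compare returns a negative value when requested sorts before
// current, which maps directly onto the refusal above.
func checkDowngrade(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkDowngrade("v1.34.1", "v1.28.0"))
}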

TestMissingContainerUpgrade (118.5s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3091835972 start -p missing-upgrade-995648 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3091835972 start -p missing-upgrade-995648 --memory=3072 --driver=docker  --container-runtime=crio: (1m5.205166703s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-995648
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-995648
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-995648 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-995648 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (48.569575448s)
helpers_test.go:175: Cleaning up "missing-upgrade-995648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-995648
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-995648: (2.039107325s)
--- PASS: TestMissingContainerUpgrade (118.50s)

TestPause/serial/Start (90.28s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-285945 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-285945 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m30.278723902s)
--- PASS: TestPause/serial/Start (90.28s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-035766 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-035766 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (124.919474ms)
-- stdout --
	* [NoKubernetes-035766] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)
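
The stderr above states the rule being asserted: --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config must be unset first. A minimal sketch of the failing and the corrected invocation:

    # rejected with MK_USAGE (exit status 14): the two flags conflict
    out/minikube-linux-arm64 start -p NoKubernetes-035766 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio

    # drop any pinned version, then start without Kubernetes
    out/minikube-linux-arm64 config unset kubernetes-version
    out/minikube-linux-arm64 start -p NoKubernetes-035766 --no-kubernetes --driver=docker --container-runtime=crio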

TestNoKubernetes/serial/StartWithK8s (42.93s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-035766 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-035766 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.519224782s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-035766 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.93s)

TestNoKubernetes/serial/StartWithStopK8s (38.88s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-035766 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-035766 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.304983987s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-035766 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-035766 status -o json: exit status 2 (514.751288ms)

-- stdout --
	{"Name":"NoKubernetes-035766","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-035766
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-035766: (2.059445144s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (38.88s)
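
Note that status -o json exits non-zero (status 2 here) whenever a component is stopped, so scripted checks should capture the output before inspecting it. A sketch, assuming jq is available on the host:

    # "|| true" keeps the non-zero exit of a half-stopped cluster from aborting the script
    status_json=$(out/minikube-linux-arm64 -p NoKubernetes-035766 status -o json || true)

    # expected at this point: Host=Running, Kubelet=Stopped, APIServer=Stopped
    echo "$status_json" | jq -r '.Host, .Kubelet, .APIServer'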

TestNoKubernetes/serial/Start (9.02s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-035766 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-035766 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.024683229s)
--- PASS: TestNoKubernetes/serial/Start (9.02s)

TestPause/serial/SecondStartNoReconfiguration (33.99s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-285945 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-285945 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.952291605s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.99s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-035766 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-035766 "sudo systemctl is-active --quiet service kubelet": exit status 1 (343.265751ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
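
The verification hinges on exit codes: systemctl is-active --quiet exits 0 only when the unit is active, and the remote status 3 (inactive) surfaces through minikube ssh as a non-zero exit. The same assertion as a shell conditional:

    # kubelet must not be active in a --no-kubernetes profile
    if out/minikube-linux-arm64 ssh -p NoKubernetes-035766 "sudo systemctl is-active --quiet service kubelet"; then
        echo "unexpected: kubelet is active" >&2
        exit 1
    fi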

TestNoKubernetes/serial/ProfileList (1.38s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.38s)

TestNoKubernetes/serial/Stop (1.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-035766
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-035766: (1.374816989s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

TestNoKubernetes/serial/StartNoArgs (7.83s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-035766 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-035766 --driver=docker  --container-runtime=crio: (7.825699457s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.83s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-035766 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-035766 "sudo systemctl is-active --quiet service kubelet": exit status 1 (460.41838ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.46s)

TestStoppedBinaryUpgrade/Setup (0.91s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

TestStoppedBinaryUpgrade/Upgrade (59.3s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.991123507 start -p stopped-upgrade-798609 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.991123507 start -p stopped-upgrade-798609 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.24619271s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.991123507 -p stopped-upgrade-798609 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.991123507 -p stopped-upgrade-798609 stop: (1.311325899s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-798609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-798609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.737734074s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (59.30s)
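
The upgrade path exercised here is: provision with an old release binary, stop the cluster, then start the same profile with the binary under test. Reproduced from the commands in the log (the old binary still uses the deprecated --vm-driver spelling):

    /tmp/minikube-v1.32.0.991123507 start -p stopped-upgrade-798609 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.991123507 -p stopped-upgrade-798609 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-798609 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=crio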

TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-798609
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-798609: (1.265634859s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.27s)
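
After an upgrade, the logs of the migrated profile are the quickest sanity check. The command from the log, plus the --file variant for attaching output to a report (the flag is a documented minikube logs option, not taken from this run):

    out/minikube-linux-arm64 logs -p stopped-upgrade-798609
    out/minikube-linux-arm64 logs -p stopped-upgrade-798609 --file=stopped-upgrade.log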

TestNetworkPlugins/group/false (3.66s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-275703 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-275703 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (204.163262ms)

-- stdout --
	* [false-275703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21767
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1018 09:26:01.637797 1441416 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:26:01.637927 1441416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:26:01.637938 1441416 out.go:374] Setting ErrFile to fd 2...
	I1018 09:26:01.637943 1441416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:26:01.638181 1441416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21767-1274243/.minikube/bin
	I1018 09:26:01.638581 1441416 out.go:368] Setting JSON to false
	I1018 09:26:01.639442 1441416 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":40109,"bootTime":1760739453,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1018 09:26:01.639505 1441416 start.go:141] virtualization:  
	I1018 09:26:01.642977 1441416 out.go:179] * [false-275703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1018 09:26:01.645858 1441416 out.go:179]   - MINIKUBE_LOCATION=21767
	I1018 09:26:01.645944 1441416 notify.go:220] Checking for updates...
	I1018 09:26:01.651621 1441416 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:26:01.654519 1441416 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21767-1274243/kubeconfig
	I1018 09:26:01.657365 1441416 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21767-1274243/.minikube
	I1018 09:26:01.660317 1441416 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1018 09:26:01.663179 1441416 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:26:01.666543 1441416 config.go:182] Loaded profile config "kubernetes-upgrade-757858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:26:01.666693 1441416 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:26:01.694572 1441416 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1018 09:26:01.694731 1441416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:26:01.772881 1441416 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-18 09:26:01.756952218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1018 09:26:01.773007 1441416 docker.go:318] overlay module found
	I1018 09:26:01.776101 1441416 out.go:179] * Using the docker driver based on user configuration
	I1018 09:26:01.778883 1441416 start.go:305] selected driver: docker
	I1018 09:26:01.778906 1441416 start.go:925] validating driver "docker" against <nil>
	I1018 09:26:01.778922 1441416 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:26:01.782437 1441416 out.go:203] 
	W1018 09:26:01.785418 1441416 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 09:26:01.788293 1441416 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-275703 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-275703

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-275703

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-275703

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-275703

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-275703

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-275703

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-275703

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-275703

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-275703

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-275703

>>> host: /etc/nsswitch.conf:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /etc/hosts:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /etc/resolv.conf:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-275703

>>> host: crictl pods:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: crictl containers:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> k8s: describe netcat deployment:
error: context "false-275703" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-275703" does not exist

>>> k8s: netcat logs:
error: context "false-275703" does not exist

>>> k8s: describe coredns deployment:
error: context "false-275703" does not exist

>>> k8s: describe coredns pods:
error: context "false-275703" does not exist

>>> k8s: coredns logs:
error: context "false-275703" does not exist

>>> k8s: describe api server pod(s):
error: context "false-275703" does not exist

>>> k8s: api server logs:
error: context "false-275703" does not exist

>>> host: /etc/cni:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: ip a s:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: ip r s:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: iptables-save:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: iptables table nat:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> k8s: describe kube-proxy daemon set:
error: context "false-275703" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-275703" does not exist

>>> k8s: kube-proxy logs:
error: context "false-275703" does not exist

>>> host: kubelet daemon status:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: kubelet daemon config:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> k8s: kubelet logs:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:22:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-757858
contexts:
- context:
    cluster: kubernetes-upgrade-757858
    user: kubernetes-upgrade-757858
  name: kubernetes-upgrade-757858
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-757858
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/kubernetes-upgrade-757858/client.crt
    client-key: /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/kubernetes-upgrade-757858/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-275703

>>> host: docker daemon status:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: docker daemon config:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /etc/docker/daemon.json:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: docker system info:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: cri-docker daemon status:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: cri-docker daemon config:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: cri-dockerd version:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: containerd daemon status:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: containerd daemon config:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /etc/containerd/config.toml:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: containerd config dump:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: crio daemon status:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: crio daemon config:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: /etc/crio:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"

>>> host: crio config:
* Profile "false-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-275703"
----------------------- debugLogs end: false-275703 [took: 3.250924057s] --------------------------------
helpers_test.go:175: Cleaning up "false-275703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-275703
--- PASS: TestNetworkPlugins/group/false (3.66s)
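
The validation failure is the point of this test: CRI-O ships no built-in pod networking, so minikube rejects --cni=false up front with MK_USAGE. A hedged sketch of invocations that should pass validation instead (the explicit plugin names are assumptions drawn from minikube's documented --cni options, not from this log):

    # rejected: the "crio" container runtime requires CNI (exit status 14)
    out/minikube-linux-arm64 start -p false-275703 --cni=false --driver=docker --container-runtime=crio

    # accepted: let minikube pick a CNI, or name one explicitly
    out/minikube-linux-arm64 start -p cni-demo --cni=auto --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p cni-demo --cni=bridge --driver=docker --container-runtime=crio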

TestStartStop/group/old-k8s-version/serial/FirstStart (59.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (59.057072184s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-136598 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b6b0e2eb-5c14-4392-9e93-758c737f224d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b6b0e2eb-5c14-4392-9e93-758c737f224d] Running
E1018 09:29:38.614744 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003706561s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-136598 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.37s)
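
The deploy step is simply: apply the busybox manifest, poll until the labeled pod is Ready, then run a command inside it. An equivalent manual sequence (kubectl wait is a rough stand-in for the framework's own polling loop):

    kubectl --context old-k8s-version-136598 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-136598 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-136598 exec busybox -- /bin/sh -c "ulimit -n"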

TestStartStop/group/old-k8s-version/serial/Stop (12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-136598 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-136598 --alsologtostderr -v=3: (11.99635054s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-136598 -n old-k8s-version-136598
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-136598 -n old-k8s-version-136598: exit status 7 (78.193533ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-136598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
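
As the "(may be ok)" note indicates, minikube status exits non-zero for a stopped cluster (status 7 above), which is expected at this point in the sequence. Scripts that only need the host state can tolerate the exit code and branch on the output:

    # capture the host state without letting the non-zero exit abort the script
    host_state=$(out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-136598 -n old-k8s-version-136598 || true)
    if [ "$host_state" = "Stopped" ]; then
        out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-136598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    fi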

TestStartStop/group/old-k8s-version/serial/SecondStart (49.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-136598 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.40864879s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-136598 -n old-k8s-version-136598
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (49.76s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c2x4c" [e97c2dca-e2c5-4d41-8dcc-b60fda13fea8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003654744s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-c2x4c" [e97c2dca-e2c5-4d41-8dcc-b60fda13fea8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003480867s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-136598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-136598 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)
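
The image audit lists everything the container runtime holds for the profile and flags images outside the stock minikube set. The listing can be reproduced directly; the JSON form is what the test parses, while the default one-image-per-line output is easier to eyeball:

    out/minikube-linux-arm64 -p old-k8s-version-136598 image list --format=json
    out/minikube-linux-arm64 -p old-k8s-version-136598 image list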

TestStartStop/group/no-preload/serial/FirstStart (76.23s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.231950807s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.23s)

TestStartStop/group/embed-certs/serial/FirstStart (91.63s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m31.625648695s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (91.63s)

TestStartStop/group/no-preload/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-886951 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fb073a56-3a60-4f54-b138-9cee2302d24a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fb073a56-3a60-4f54-b138-9cee2302d24a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003868924s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-886951 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

TestStartStop/group/no-preload/serial/Stop (12.05s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-886951 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-886951 --alsologtostderr -v=3: (12.053923299s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.05s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-886951 -n no-preload-886951
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-886951 -n no-preload-886951: exit status 7 (75.263487ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-886951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (52.13s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-886951 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.641469692s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-886951 -n no-preload-886951
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.13s)

TestStartStop/group/embed-certs/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-559379 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a8bb1150-d2bb-4277-91a3-9eb18dfdfc48] Pending
helpers_test.go:352: "busybox" [a8bb1150-d2bb-4277-91a3-9eb18dfdfc48] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a8bb1150-d2bb-4277-91a3-9eb18dfdfc48] Running
E1018 09:32:43.344024 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003323705s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-559379 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

TestStartStop/group/embed-certs/serial/Stop (12.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-559379 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-559379 --alsologtostderr -v=3: (12.289269723s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-559379 -n embed-certs-559379
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-559379 -n embed-certs-559379: exit status 7 (76.913361ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-559379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (53.88s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-559379 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.540147816s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-559379 -n embed-certs-559379
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.88s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-smc6z" [6d509118-77b8-4441-b5b7-a4389460ffb0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002971781s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-smc6z" [6d509118-77b8-4441-b5b7-a4389460ffb0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00285441s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-886951 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)
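
The same two kubectl calls reproduce this check outside the harness; the label selector below is the one the test polls on:

    kubectl --context no-preload-886951 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    kubectl --context no-preload-886951 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard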

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-886951 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
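
"image list --format=json" is what the check parses. With jq (not part of the harness, and assuming the output is a JSON array of objects with a repoTags field), the non-registry.k8s.io images the test flags can be listed directly -- a sketch:

    out/minikube-linux-arm64 -p no-preload-886951 image list --format=json \
      | jq -r '.[].repoTags[]' | grep -v '^registry.k8s.io/'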

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m24.909218621s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.91s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d75lm" [91f48b73-7d2e-4de2-a40c-43052918b773] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003843876s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d75lm" [91f48b73-7d2e-4de2-a40c-43052918b773] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00441902s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-559379 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-559379 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/FirstStart (41.85s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 09:34:29.848273 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:29.854589 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:29.865935 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:29.887256 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:29.928684 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:30.010423 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:30.172011 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:30.493515 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:31.135756 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:32.417639 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:34.979661 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:38.614969 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:40.101379 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:34:50.342710 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.848071966s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.85s)
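
The flags here are what make newest-cni distinct: --network-plugin=cni defers networking to an external CNI, --extra-config passes the pod CIDR straight through to kubeadm, and --wait is narrowed because without a CNI most pods cannot schedule yet (see the WARNING lines in the later steps). Reduced to the essentials (profile name hypothetical):

    minikube start -p newest-cni-demo --memory=3072 \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1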

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-250274 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-250274 --alsologtostderr -v=3: (1.373461842s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250274 -n newest-cni-250274
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250274 -n newest-cni-250274: exit status 7 (87.69454ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-250274 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (18.46s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 09:35:10.824125 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-250274 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (17.825428451s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-250274 -n newest-cni-250274
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.46s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-593480 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b16ad816-3da6-4828-b35a-f8c0f32a7093] Pending
helpers_test.go:352: "busybox" [b16ad816-3da6-4828-b35a-f8c0f32a7093] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b16ad816-3da6-4828-b35a-f8c0f32a7093] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003558312s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-593480 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)
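
DeployApp is a compact smoke test: create the busybox pod, wait for it to become Ready, then run a command inside it. kubectl wait is a stand-in for the harness's label polling (assumption: the pod is named busybox, per the manifest used here):

    kubectl --context default-k8s-diff-port-593480 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-593480 wait --for=condition=Ready pod/busybox --timeout=8m
    # Prints the open-file limit seen inside the container
    kubectl --context default-k8s-diff-port-593480 exec busybox -- /bin/sh -c "ulimit -n"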

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-250274 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-593480 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-593480 --alsologtostderr -v=3: (13.805505674s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.81s)
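
Stop shuts the node down rather than killing it outright; the following EnableAddonAfterStop step confirms the profile then reads as Stopped. The stop itself is a one-liner (-v=3 only raises log verbosity):

    out/minikube-linux-arm64 stop -p default-k8s-diff-port-593480 --alsologtostderr -v=3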

TestNetworkPlugins/group/auto/Start (85.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.68554558s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.69s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480: exit status 7 (91.750238ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-593480 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (62.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 09:35:51.786376 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-593480 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.180055414s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-593480 -n default-k8s-diff-port-593480
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (62.54s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b2xsq" [b6bcdba7-3aa5-4913-b828-bba9ad382a0a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00385065s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b2xsq" [b6bcdba7-3aa5-4913-b828-bba9ad382a0a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003694415s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-593480 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-593480 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-275703 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)
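
KubeletFlags is just a remote pgrep; the same one-liner prints the full kubelet command line inside the node container on any profile, which is handy when debugging flag plumbing:

    out/minikube-linux-arm64 ssh -p auto-275703 "pgrep -a kubelet"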

TestNetworkPlugins/group/auto/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-275703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c988x" [ef406dec-a44d-49f5-8f69-f72c6e5b5cfb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c988x" [ef406dec-a44d-49f5-8f69-f72c6e5b5cfb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.010696414s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.42s)
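
All the NetCatPod steps follow the same replace-then-poll shape. kubectl wait can stand in for the harness's pod polling (an approximation, using the same app=netcat selector the test watches):

    kubectl --context auto-275703 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-275703 wait --for=condition=Ready pod -l app=netcat --timeout=15m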

TestNetworkPlugins/group/kindnet/Start (85.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.957624963s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.96s)

TestNetworkPlugins/group/auto/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-275703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)
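
The DNS probe is a single exec through the deployment; resolving kubernetes.default from inside a pod shows cluster DNS is reachable over the CNI under test:

    kubectl --context auto-275703 exec deployment/netcat -- nslookup kubernetes.default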

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1018 09:37:12.433487 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:37:12.440923 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:37:12.453296 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:37:12.475025 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:37:12.516404 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1018 09:37:12.598565 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
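
Localhost and HairPin differ only in the nc target: localhost:8080 checks the pod can reach its own listener directly, while dialing its own service name ("netcat") exercises hairpin NAT back through the service IP. Both probes come straight from the test:

    kubectl --context auto-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"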

TestNetworkPlugins/group/calico/Start (63.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1018 09:37:43.343741 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/addons-718596/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:37:53.426712 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m3.828447385s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.83s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-6dl9z" [8d20e0fa-2187-4fe6-9745-3dd9c4701b6a] Running
E1018 09:38:34.388748 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003958128s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-275703 "pgrep -a kubelet"
I1018 09:38:37.354926 1276097 config.go:182] Loaded profile config "kindnet-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.42s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-275703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-282zj" [720f9021-e640-4a06-80b8-e4d20bc2a1e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-282zj" [720f9021-e640-4a06-80b8-e4d20bc2a1e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004373602s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-b5tpb" [53cd271c-e25d-4547-8e94-8dbcbf379a04] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-b5tpb" [53cd271c-e25d-4547-8e94-8dbcbf379a04] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004020677s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-275703 "pgrep -a kubelet"
I1018 09:38:47.546152 1276097 config.go:182] Loaded profile config "calico-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-275703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-csmq7" [f7679533-52dc-41bb-835d-c173914b172f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-csmq7" [f7679533-52dc-41bb-835d-c173914b172f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.011330329s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.34s)

TestNetworkPlugins/group/kindnet/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-275703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-275703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (72.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m12.21288208s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.21s)
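
As this run shows, --cni takes either a built-in plugin name (kindnet, calico, flannel, bridge are all exercised here) or a manifest path like testdata/kube-flannel.yaml, so a custom CNI configuration is started the same way (profile names below are hypothetical):

    # Built-in plugin by name
    minikube start -p cni-demo --cni=kindnet --driver=docker --container-runtime=crio
    # Custom CNI from a manifest shipped with the tests
    minikube start -p cni-custom-demo --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio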

TestNetworkPlugins/group/enable-default-cni/Start (57.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1018 09:39:29.848005 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:39:38.615094 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/functional-441731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:39:56.310781 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:39:57.550079 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/old-k8s-version-136598/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:13.592449 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:13.598751 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:13.610201 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:13.631603 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:13.673241 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:13.754797 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:13.916388 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:14.238073 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:14.880031 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:16.161446 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:40:18.722795 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (57.256117354s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (57.26s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-275703 "pgrep -a kubelet"
I1018 09:40:21.910342 1276097 config.go:182] Loaded profile config "enable-default-cni-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-275703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jrxhp" [7cc8c180-4f7b-43bb-8478-f71db7acde25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jrxhp" [7cc8c180-4f7b-43bb-8478-f71db7acde25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003558786s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-275703 "pgrep -a kubelet"
E1018 09:40:23.845087 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1018 09:40:23.892935 1276097 config.go:182] Loaded profile config "custom-flannel-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-275703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sdc58" [3237a96a-d3a7-4d78-ac77-7a3a03da45d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sdc58" [3237a96a-d3a7-4d78-ac77-7a3a03da45d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003343677s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-275703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-275703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (67.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.592867233s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.59s)

TestNetworkPlugins/group/bridge/Start (79.61s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1018 09:41:35.531071 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/default-k8s-diff-port-593480/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:01.124808 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:01.131207 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:01.142690 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:01.164174 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:01.205584 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:01.287126 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:01.448741 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:01.770696 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:02.412763 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:03.694336 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-275703 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m19.609713149s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.61s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-kgmb9" [303bc3c3-5444-4a02-b8a4-0452831dcf89] Running
E1018 09:42:06.255661 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004089934s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-275703 "pgrep -a kubelet"
I1018 09:42:10.606374 1276097 config.go:182] Loaded profile config "flannel-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-275703 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-brs25" [ccc580dd-33cc-4240-93e4-572df8280d86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 09:42:11.377379 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:42:12.432983 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/no-preload-886951/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-brs25" [ccc580dd-33cc-4240-93e4-572df8280d86] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003930363s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)
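Note: kubectl replace --force deletes and recreates the objects in the manifest, guaranteeing a fresh netcat pod for each plugin run; the Pending → Running transition above is the image pull and container start. A rough manual equivalent of the readiness wait (hypothetical, using the deployment name shown in the log):

	kubectl --context flannel-275703 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context flannel-275703 rollout status deployment/netcat --timeout=15m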

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-275703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)
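Note: the DNS probe resolves kubernetes.default from inside the netcat pod, exercising the cluster DNS path across the CNI. To reproduce by hand and cross-check the answer against the API service's ClusterIP (assuming the context exists):

	kubectl --context flannel-275703 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context flannel-275703 get svc kubernetes -o jsonpath='{.spec.clusterIP}'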

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)
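Note: Localhost and HairPin both drive nc with -z (probe without sending data), -w 5 (five-second timeout), and -i 5 (a delay interval between probes). Localhost checks the pod can reach its own port directly; HairPin checks it can reach itself back through its service name (netcat:8080), the classic hairpin-NAT case. A hand-run sketch of the hairpin probe:

	kubectl --context flannel-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin-ok"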

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-275703 "pgrep -a kubelet"
I1018 09:42:21.492511 1276097 config.go:182] Loaded profile config "bridge-275703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-275703 replace --force -f testdata/netcat-deployment.yaml
E1018 09:42:21.619276 1276097 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/auto-275703/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zp7wq" [5d1bae7d-ed3f-4f3b-a953-9ae5f1a39593] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zp7wq" [5d1bae7d-ed3f-4f3b-a953-9ae5f1a39593] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00501835s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-275703 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-275703 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (31/327)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-695796 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-695796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-695796
--- SKIP: TestDownloadOnlyKic (0.42s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-877810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-877810
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-275703 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-275703" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:22:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-757858
contexts:
- context:
    cluster: kubernetes-upgrade-757858
    user: kubernetes-upgrade-757858
  name: kubernetes-upgrade-757858
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-757858
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/kubernetes-upgrade-757858/client.crt
    client-key: /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/kubernetes-upgrade-757858/client.key
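Note: the dump above explains every failed probe in this section: the kubeconfig carries only a stale kubernetes-upgrade-757858 entry and current-context is empty, so any kubectl call scoped to kubenet-275703 fails with "context was not found". A quick hypothetical confirmation:

	kubectl config get-contexts
	kubectl config current-context   # errors, since current-context is unset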

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-275703

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-275703"

                                                
                                                
----------------------- debugLogs end: kubenet-275703 [took: 3.699034041s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-275703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-275703
--- SKIP: TestNetworkPlugins/group/kubenet (3.85s)
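Note: the skip fires before minikube start ever runs, so the kubenet-275703 profile is never created; that is why debugLogs finds neither a kubectl context nor a profile. The same check by hand (kubenet-275703 would be absent from the output):

	out/minikube-linux-arm64 profile list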

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-275703 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-275703

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-275703" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21767-1274243/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:22:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-757858
contexts:
- context:
    cluster: kubernetes-upgrade-757858
    user: kubernetes-upgrade-757858
  name: kubernetes-upgrade-757858
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-757858
  user:
    client-certificate: /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/kubernetes-upgrade-757858/client.crt
    client-key: /home/jenkins/minikube-integration/21767-1274243/.minikube/profiles/kubernetes-upgrade-757858/client.key

                                                
                                                

                                                
                                                
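The kubeconfig above is consistent with the errors: only the kubernetes-upgrade-757858 cluster, context, and user entries remain, and current-context is "", so a lookup of "cilium-275703" has nothing to resolve against. For anyone triaging a similar run locally, standard kubectl config and minikube subcommands (real commands; the profile name is taken from this log) expose the same state:

  kubectl config get-contexts     # cilium-275703 is absent from the list
  kubectl config current-context  # errors here, since current-context is unset
  minikube profile list           # shows which minikube profiles still exist
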
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-275703

>>> host: docker daemon status:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: docker daemon config:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: docker system info:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: cri-docker daemon status:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: cri-docker daemon config:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: cri-dockerd version:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: containerd daemon status:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: containerd daemon config:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: containerd config dump:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: crio daemon status:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: crio daemon config:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: /etc/crio:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

>>> host: crio config:
* Profile "cilium-275703" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-275703"

----------------------- debugLogs end: cilium-275703 [took: 3.993878025s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-275703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-275703
--- SKIP: TestNetworkPlugins/group/cilium (4.15s)
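
The group itself was skipped, so the errors above appear to be teardown noise rather than a product failure. To re-run just this group, a sketch of the usual invocation for minikube's integration suite (assuming a repository checkout and a prebuilt out/minikube-linux-arm64 binary, as used throughout this report):

  # from the minikube repo root
  go test ./test/integration -run 'TestNetworkPlugins/group/cilium' -v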